Linux compiling sources
March 17, 2010 Leave a comment
I was not able to compile on my Ubuntu distro today until I ran sudo apt-get install build-essential.
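If you want to check for this up front, here is a quick sketch (assuming a Debian/Ubuntu-style system; the package name differs on other distros):

```shell
# Check whether a C compiler is on the PATH; if not, build-essential
# (or your distro's equivalent) is probably missing.
if command -v gcc >/dev/null 2>&1; then
  echo "gcc found"
else
  echo "no gcc: try sudo apt-get install build-essential"
fi
```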
February 24, 2010 Leave a comment
Flexible Single Master Operations (FSMO) Roles
I was recently asked a question about FSMO roles, so I thought it would be a good time to touch on this topic again as it relates to Windows Server 2008 R2; as we will see, not much has changed.
These roles are classified into two groups:
1. Forest Roles
· Schema Master – As the name suggests, any change to the AD schema (creating classes or modifying attributes) is made on this single domain controller and then replicated to the other domain controllers in your environment. Funneling changes through one role holder prevents the schema corruption that could occur if multiple domain controllers made conflicting changes at once. This is one of the most important FSMO roles.
· Domain Naming Master – This role is used infrequently, only when domains are added to or removed from the forest. It ensures that every domain name in the forest is unique.
2. Domain Roles
· Infrastructure Master – This role watches for changes to cross-domain object references (for example, when an object in another domain that a local group references is renamed or moved) and replicates the updates to the other domain controllers in its domain.
· RID Master – This role allocates pools of relative identifiers (RIDs) to the domain controllers, making sure every security principal receives a unique identifier.
· PDC Emulator – This role is responsible for account policies, such as client password changes, and for time synchronization in the domain.
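Before transferring or seizing anything, it helps to see which domain controllers currently hold the roles. The built-in netdom tool (available on a domain controller) reports them; the server names below are placeholders, not real output from any particular environment:

```
C:\> netdom query fsmo
Schema master               DC01.example.local
Domain naming master        DC01.example.local
PDC                         DC02.example.local
RID pool manager            DC02.example.local
Infrastructure master       DC02.example.local
```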
Where are these roles configured?
1. The domain-wide roles are managed in Active Directory Users and Computers: right-click the domain and select Operations Masters.
2. The forest-wide Domain Naming Master is managed in Active Directory Domains and Trusts: right-click the node and select Operations Master to see the current role holder.
3. The forest-wide Schema Master is deliberately not exposed by any default tool, because editing the schema can cause serious problems in an Active Directory environment. To gain access you need to register the DLL with regsvr32 schmmgmt.dll and then add the Active Directory Schema snap-in to an MMC console.
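Concretely, from an elevated command prompt on a domain controller:

```
C:\> regsvr32 schmmgmt.dll
C:\> mmc
```

In the MMC console that opens, choose File, then Add/Remove Snap-in, and add Active Directory Schema.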
Seizing of Roles
If a server holding a role fails, you need to seize the role on another domain controller. This is how it is done:
For the Schema Master:
Open a command prompt and type ntdsutil.
1. At the ntdsutil: prompt, type roles to enter fsmo maintenance.
2. At the fsmo maintenance: prompt, type connections to enter server connections.
3. At the server connections: prompt, type connect to server <domain controller>, where <domain controller> is the name of the domain controller that will take over the role.
4. At the server connections: prompt, type quit to return to fsmo maintenance.
5. At the fsmo maintenance: prompt, type seize schema master.
After you have seized the role, type quit to exit ntdsutil.
The procedure is identical for the remaining roles; only the seize command in step 5 changes:
For the Domain Naming Master: seize domain naming master
For the Infrastructure Master: seize infrastructure master
For the RID Master: seize RID master
For the PDC Emulator: seize PDC
In each case, once the role has been seized, type quit to exit ntdsutil.
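Put together, a complete seizure session looks like the following transcript. DC01 is a placeholder for the surviving domain controller that will take over the role; substitute the seize command for whichever role you need:

```
C:\> ntdsutil
ntdsutil: roles
fsmo maintenance: connections
server connections: connect to server DC01
server connections: quit
fsmo maintenance: seize schema master
fsmo maintenance: quit
ntdsutil: quit
```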
February 4, 2010 Leave a comment
This tutorial assumes that the new physical hard drive has been installed on the system and is visible to the operating system. The best way to verify this is to enter the system BIOS setup during the boot process and ensure that the BIOS sees the disk drive. Sometimes the BIOS will provide a menu option to scan for new drives. If the BIOS does not see the disk drive, double-check the connectors and jumper settings (if any) on the drive.
Assuming the drive is visible to the BIOS, it should automatically be detected by the operating system. Typically, the disk drives in a system are assigned device names beginning with hd or sd, followed by a letter identifying the device. For example, the first device might be /dev/sda, the second /dev/sdb, and so on.
The following is output from a system with only one physical disk drive:
ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sda5
This shows that the disk drive represented by /dev/sda is itself divided into three partitions, represented by /dev/sda1, /dev/sda2 and /dev/sda5.
The following output is from the same system after a second hard disk drive has been installed and detected by the operating system:
ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sda5 /dev/sdb
As shown above, the new hard drive has been assigned to the device file /dev/sdb. At this point the drive has no partitions shown (because we have yet to create any).
The next step is to create one or more Linux partitions on the new disk drive. This is achieved using the fdisk utility which takes as a command-line argument the device to be partitioned (in this case /dev/sdb):
sudo fdisk /dev/sdb
[sudo] password for johndoe:
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xc2fe324b.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help):
In order to view the current partitions on the disk enter the p command:
Command (m for help): p
Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc2fe324b
Device Boot Start End Blocks Id System
As we can see from the above fdisk output, the disk currently has no partitions because it is a previously unused disk. The next step is to create a new partition on the disk, a task which is performed by entering n (for new partition) and p (for primary partition):
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
In this example we only plan to create one partition which will be partition 1. Next we need to specify where the partition will begin and end. Since this is the first partition we need it to start at cylinder 1 and since we want to use the entire disk we specify the last cylinder as the end. Note that if you wish to create multiple partitions you can specify the size of each partition by cylinders, bytes, kilobytes or megabytes.
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261
Now that we have specified the partition we need to write it to the disk using the w command:
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
If we now look at the devices again we will see that the new partition is visible as /dev/sdb1:
ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sda5 /dev/sdb /dev/sdb1
Now that the disk has been successfully partitioned, the next step is to create a file system on our new partition.
We now have a new disk installed, it is visible to Ubuntu, and we have configured a Linux partition on it. The next step is to create a Linux file system on the partition so that the operating system can use it to store files and data. The easiest way to create a file system on a partition is to use the mkfs.ext3 utility, which takes as arguments the label and the partition device:
sudo mkfs.ext3 -L /photos /dev/sdb1
mke2fs 1.40.2 (12-Jul-2007)
Filesystem label=/photos
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524112 blocks
26205 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Now that we have created a new file system on the Linux partition of our new disk drive, we need to mount it so that it is accessible. In order to do this we need to create a mount point. A mount point is simply a directory into which the file system will be mounted. For the purposes of this example we will create a /photos directory to match our file system label (although it is not necessary for these values to match):
sudo mkdir /photos
The file system may then be manually mounted using the mount command:
sudo mount /dev/sdb1 /photos
Running the mount command with no arguments shows us all currently mounted file systems (including our new file system):
mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
/sys on /sys type sysfs (rw,noexec,nosuid,nodev)
varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755)
varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)
udev on /dev type tmpfs (rw,mode=0755)
devshm on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
lrm on /lib/modules/2.6.22-14-generic/volatile type tmpfs (rw)
securityfs on /sys/kernel/security type securityfs (rw)
/dev/sdb1 on /photos type ext3 (rw)
In order to set up the system so that the new file system is automatically mounted at boot time, an entry needs to be added to the /etc/fstab file. This may be edited by issuing the following command in a terminal window:
sudo vi /etc/fstab
The following example shows an /etc/fstab file configured to automount our /photos partition:
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
# /dev/sda1
UUID=4a621e4d-8c8b-4b39-8934-98ab8aa52ebc / ext3 defaults,errors=remount-ro 0 1
# /dev/sda5
UUID=9c82bf09-c6f7-4042-8927-34e46518b224 none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec 0 0
/dev/sdb1 /photos auto defaults 0 0
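Before editing /etc/fstab by hand, a small guard like this sketch avoids adding a duplicate entry. It is written against a local copy, fstab.test, so the real /etc/fstab is untouched; the entry matches the example above:

```shell
# Append the /photos entry only if no line for that mount point exists yet.
# Operates on a local copy (fstab.test), not the real /etc/fstab.
fstab=fstab.test
entry='/dev/sdb1 /photos auto defaults 0 0'
touch "$fstab"
grep -qF ' /photos ' "$fstab" || echo "$entry" >> "$fstab"
cat "$fstab"
```

Running it a second time is a no-op, which makes it safe to include in a provisioning script.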
Credits and copyright on technical research go to Techotopia.com, 2009.
February 3, 2010 1 Comment
The MSSQL Challenge
Today I submitted a challenge to our MSSQL DBAs in house; while I cannot disclose all of the information for confidentiality reasons, I would like to share some of it. Take part in my challenge and I will welcome you to a free lunch in New York.
So it starts off with a little story as it relates to MSSQL services.
When closing deals we hold live meetings, in person or on conference calls, where executive decision makers and engineers meet to discuss the game plan for managed hosting solutions. At this point the experts are challenged and exercise their ideas to the best of their ability, working through every possible scenario before making a decision to move forward. The most important role of an engineer during this meeting is to have a full understanding of the proposal, an understanding of the client's needs, and, lastly, to answer questions as accurately as possible. It is in the best interest of both sides to raise the most challenging scenarios, and most of these usually relate to security or disaster recovery.
The worst response is to give a wrong answer; the second worst is to respond with "Let me research that and I'll get back to you."
So back to my original point, the MSSQL Challenge. Key notes: with SQL 2000, failover in a cluster took several minutes, depending on how much data in the transaction log had to be rolled forward and back.
With SQL 2005 and 2008 Enterprise Edition, startup is much faster: they roll forward any completed transactions, bring the database online, and then roll back any incomplete transactions. With the Standard Edition of SQL 2005 and 2008, the failover process brings the database online only after transactions have been rolled forward and back.
February 2, 2010 Leave a comment
Data Migration Services
I have been working with CSV files, importing massive amounts of data between two customer relationship management suites: Microsoft CRM 4.0 and Salesforce CRM. A project I recently completed involved finding a way to export all new leads generated in Salesforce over a seven-day period and import them into CRM. I created a CRM virtual machine inside a vSphere cloud; this machine acted as a domain controller, database server, web server, and application server.
January 25, 2010 Leave a comment
Microsoft Security Essentials provides real-time protection for your home PC that guards against viruses, spyware, and other malicious software. This is a free download from Microsoft that is simple to install, easy to use, and always kept up to date so you can be assured your PC is protected by the latest technology.
Up until now, many of us may still be using third-party applications for a complete malware/spyware/virus suite. However, I highly recommend taking a look at MSE as a successor for your workstation needs. MSE installs on any Windows XP SP2+, Vista, or Windows 7 operating system. Recently I was speaking with a friend who had a compromised machine, infected with spyware and viruses; MSE fully recovered their system with no loss of data. Here is a screenshot from my virtual machine's installation. In an enterprise or server environment I recommend ESET's NOD32: http://www.eset.com/.
January 4, 2010 Leave a comment
Happy New Year, 2010
I was reading the latest SQL Server Magazine, and there is a great cover story on MySpace's infrastructure, with plenty of detailed information about the database. I have always known that MySpace operated using Microsoft's SQL Server as its database engine of choice.
December 21, 2009 Leave a comment
Throughout my career I have built my strongest customer relationships through migration services. How important are migration services?
Hosting companies and consultants alike may offer a broad range of migration services, but there can be a level of uncertainty going in. I have worked with partners and competitors whose migration services have ranged from site visits to the customer site or the hosting facility, to complete remote migrations with multiple participants across the globe. The most critical understanding in a migration is that your reputation rests on delivering what you committed to. Common questions:
– How much time will it take?
– How much does it cost?
– Security?
"When it's done" should never be the answer; give as close an approximation as possible, and in any event run a discovery process beforehand so that you can respond to this question accurately.
This is also where you should discuss whether the migration will take place over a single business day, an entire weekend, or several nights during off-peak hours, depending on the nature of the business being migrated.
December 11, 2009 2 Comments
For product planners at Dell, HP, IBM, Juniper and Sun, the talk about fabric-based servers moved from philosophical discussion to urgent-must-take-action status when Cisco introduced the Unified Computing System.
The presence of UCS from the networking giant is tangible evidence that a large piece of the future server value proposition will be intellectual property related to the network embedded in the server. Blade server vendors pioneered embedded server networks by integrating adapter cards and switches from networking vendors. If Cisco is right, successful server vendors will separate themselves from their competitors with their own sophisticated networking technology, needed to connect virtual servers, networks, and storage in environments ranging from a single chassis to part of a public cloud that stretches around the globe.
None of the aforementioned server and networking vendors, including HP with its acquisition of 3Com, have introduced their own FCoE intellectual property with which to compete against Cisco. I believe this is why all of these vendors still covet the Brocade portfolio, which spans Ethernet and Fibre Channel, servers to fabric, and core to edge.
Source: Frank Berry @ networkcomputing.com
December 10, 2009 Leave a comment
Amazon Kindle DX HOHOHO Happy Holidays
Santa never sleeps, and in the spirit of giving, Logicworks has its own Santa.