Compiling Sources on Linux

I was not able to compile on my Ubuntu distro today until I ran sudo apt-get install build-essential.
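For anyone hitting the same wall, this is the typical flow once the toolchain is installed; a minimal sketch, assuming an autotools-based source tarball (the file name example-1.0.tar.gz is hypothetical):

sudo apt-get install build-essential
tar xzf example-1.0.tar.gz
cd example-1.0
./configure
make
sudo make install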

Posted via email from IT Rockstar

Flexible Single Master Operations (FSMO) Roles


I was recently asked a question about FSMO roles, so I thought it would be a good time to touch on this topic again as it relates to Windows 2008 R2, where we see that not much has changed.

What is a FSMO role? Active Directory operates as a multi-master directory: its domain controllers each hold the directory information, spread across machines for performance and reliability. A few operations, however, are only safe when performed by one machine at a time, so those responsibilities are delegated within the infrastructure to specific domain controllers, the FSMO role holders.

I’m including information sourced from Microsoft white papers that explains what the roles are used for and how to manage them. I find this a great reference should the need arise when a new domain controller is being introduced or a recovery process is underway.

There are five roles, classified into two groups:

1. Forest Roles

·         Schema Master – As the name suggests, changes to the schema (the creation of object classes or changes to attributes in AD) are made on a single domain controller and then replicated to the other domain controllers present in your environment. Because only one domain controller can write to the schema, there is no corruption of the AD schema from multiple domain controllers trying to make changes at once. This is one of the most important roles in the FSMO infrastructure.

·         Domain Naming Master – This role is not used very often, only when you add or remove domains in the forest. It ensures that every domain name in the environment is unique.

2. Domain Roles

·         Infrastructure Master – This role tracks references to objects that live in other domains, such as group members from another domain. When those objects change, it updates the references and replicates the changes to the other domain controllers in its domain.

·         RID Master – This role allocates pools of relative identifiers (RIDs) to the domain controllers in its domain, making sure each security principal receives a unique identifier.

·         PDC Emulator – This role is responsible for account policies, such as client password changes, and for time synchronization in the domain.


Where are these roles configured?

1. The domain-wide roles are configured in Active Directory Users and Computers: right-click the domain and select Operations Masters.

2. Of the forest roles, the Domain Naming Master is configured in Active Directory Domains and Trusts: right-click the root node and select Operations Master. It will show you the current role holder.

3. The Schema Master, the other forest role, is not accessible from any default tool; this is deliberate, because editing the schema can create serious problems in an Active Directory environment. To gain access you need to create a snap-in and register the DLL with regsvr32 schmmgmt.dll.
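A minimal sketch of the steps involved, from an elevated command prompt (the netdom check at the end is optional, assumes the tools that ship netdom are installed, and simply lists the current holders of all five roles):

regsvr32 schmmgmt.dll
mmc
(in the MMC console choose File > Add/Remove Snap-in and add Active Directory Schema)
netdom query fsmo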

Seizing Roles

If the domain controller holding a role fails and cannot be recovered, you need to seize the role on another domain controller. (When the original role holder is still online, transfer the role instead; seizing is a last resort.) Note that once the Schema Master, Domain Naming Master, or RID Master role has been seized, the original role holder should never be brought back onto the network without being rebuilt. The procedure is the same for all five roles; only the command in step 5 differs.

Go to a command prompt and type ntdsutil, then:

1. At the ntdsutil: prompt, type roles to enter fsmo maintenance.

2. At the fsmo maintenance: prompt, type connections to enter server connections.

3. At the server connections: prompt, type connect to server <domain controller>, where <domain controller> is the name of the domain controller that is taking over the role.

4. At the server connections: prompt, type quit to return to fsmo maintenance.

5. At the fsmo maintenance: prompt, type the seize command for the role in question:

·         Schema Master: seize schema master

·         Domain Naming Master: seize domain naming master

·         Infrastructure Master: seize infrastructure master

·         RID Master: seize RID master

·         PDC Emulator: seize PDC

After you have seized the role, type quit to exit ntdsutil.
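For reference, here is what a complete session looks like, assuming the surviving domain controller taking the role is named DC02 (a hypothetical name) and the role being seized is the PDC Emulator:

C:\>ntdsutil
ntdsutil: roles
fsmo maintenance: connections
server connections: connect to server DC02
server connections: quit
fsmo maintenance: seize PDC
fsmo maintenance: quit
ntdsutil: quit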

Posted via email from IT Rockstar

LINUX – Adding New Disks, Creating New Partitions

I have been working with Ubuntu as my preferred workstation Linux distribution.
Although I do not have a bare-metal Linux system, with the technology provided by VMware Fusion and no shortage of SSH clients it is as good as the real thing. The Linux fans will still be pleased that I run everything on Snow Leopard in virtual machines.

One of the most common administrative tasks is adding a new hard disk to a Linux system, and one of the most diverse things about Linux is that you can find a dozen ways to do the same thing. With that said, I am ruling out any GUI tools in favor of a procedure that can be carried out with remote hands and that mirrors server administration.

Getting Started

This tutorial assumes that the new physical hard drive has been installed on the system and is visible to the operating system. The best way to verify this is to enter the system BIOS setup during the boot process and ensure that the BIOS sees the disk drive. Sometimes the BIOS will provide a menu option to scan for new drives. If the BIOS does not see the disk drive, double-check the connectors and jumper settings (if any) on the drive.


Finding the New Hard Drive in Ubuntu

Assuming the drive is visible to the BIOS, it should automatically be detected by the operating system. Typically, the disk drives in a system are assigned device names beginning with hd or sd, followed by a letter to indicate the device. For example, the first device might be /dev/sda, the second /dev/sdb and so on.

The following is output from a system with only one physical disk drive:

ls /dev/sd*

/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5

This shows that the disk drive represented by /dev/sda is itself divided into three partitions, represented by /dev/sda1, /dev/sda2 and /dev/sda5.

The following output is from the same system after a second hard disk drive has been installed and detected by the operating system:

ls /dev/sd*

/dev/sda   /dev/sda1  /dev/sda2 /dev/sda5 /dev/sdb

As shown above, the new hard drive has been assigned to the device file /dev/sdb. At this point the drive has no partitions shown (because we have yet to create any).
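If a newly installed drive does not appear under /dev, the kernel ring buffer is a quick place to confirm whether the disk was detected at all (the exact output will vary by system):

dmesg | tail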


Creating Linux Partitions

The next step is to create one or more Linux partitions on the new disk drive. This is achieved using the fdisk utility, which takes as a command-line argument the device to be partitioned (in this case /dev/sdb):

sudo fdisk /dev/sdb

[sudo] password for johndoe:

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel with disk identifier 0xc2fe324b.

Changes will remain in memory only, until you decide to write them.

After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help):

In order to view the current partitions on the disk enter the p command:

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes

255 heads, 63 sectors/track, 261 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk identifier: 0xc2fe324b

   Device Boot      Start         End      Blocks   Id  System

As we can see from the above fdisk output, the disk currently has no partitions because it is a previously unused disk. The next step is to create a new partition on the disk, a task which is performed by entering n (for new partition) and p (for primary partition):

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4):

In this example we only plan to create one partition which will be partition 1. Next we need to specify where the partition will begin and end. Since this is the first partition we need it to start at cylinder 1 and since we want to use the entire disk we specify the last cylinder as the end. Note that if you wish to create multiple partitions you can specify the size of each partition by cylinders, bytes, kilobytes or megabytes.

Partition number (1-4): 1

First cylinder (1-261, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):

Using default value 261

Now that we have specified the partition we need to write it to the disk using the w command:

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

If we now look at the devices again we will see that the new partition is visible as /dev/sdb1:

ls /dev/sd*

/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5 /dev/sdb  /dev/sdb1

Now that the disk has been successfully partitioned, the next step is to create a file system on our new partition.
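Before doing so, the new partition table can be double-checked with fdisk's listing mode, a quick sanity check against the same device:

sudo fdisk -l /dev/sdb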


Creating a Filesystem on an Ubuntu Disk Partition

We now have a new disk installed; it is visible to Ubuntu and we have configured a Linux partition on the disk. The next step is to create a Linux file system on the partition so that the operating system can use it to store files and data. The easiest way to create a file system on a partition is to use the mkfs.ext3 utility, which takes as arguments the label and the partition device:

sudo mkfs.ext3 -L /photos /dev/sdb1

mke2fs 1.40.2 (12-Jul-2007)

Filesystem label=/photos

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

262144 inodes, 524112 blocks

26205 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=536870912

16 block groups

32768 blocks per group, 32768 fragments per group

16384 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912

Writing inode tables: done                           

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.
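As a side note, if you prefer UUID-based /etc/fstab entries like the ones shown for /dev/sda later in this tutorial, the UUID of the newly created filesystem can be retrieved with blkid:

sudo blkid /dev/sdb1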


Mounting a Filesystem

Now that we have created a new file system on the Linux partition of our new disk drive we need to mount it so that it is accessible. In order to do this we need to create a mount point. A mount point is simply a directory into which the file system will be mounted. For the purposes of this example we will create a /photos directory to match our file system label (although it is not necessary that these values match):

sudo mkdir /photos

The file system may then be manually mounted using the mount command:

sudo mount /dev/sdb1 /photos

Running the mount command with no arguments shows us all currently mounted file systems (including our new file system):

mount

/dev/sda1 on / type ext3 (rw,errors=remount-ro)

proc on /proc type proc (rw,noexec,nosuid,nodev)

/sys on /sys type sysfs (rw,noexec,nosuid,nodev)

varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755)

varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)

udev on /dev type tmpfs (rw,mode=0755)

devshm on /dev/shm type tmpfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

lrm on /lib/modules/2.6.22-14-generic/volatile type tmpfs (rw)

securityfs on /sys/kernel/security type securityfs (rw)

/dev/sdb1 on /photos type ext3 (rw)
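Another quick confirmation is df, which shows the mount point along with the capacity available on the new filesystem (the figures will vary with your disk):

df -h /photos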


Configuring Ubuntu to Automatically Mount a Filesystem

In order to set up the system so that the new file system is automatically mounted at boot time, an entry needs to be added to the /etc/fstab file. This may be edited by issuing the following command in a terminal window:

sudo vi /etc/fstab

The following example shows an /etc/fstab file configured to automount our /photos partition:

# /etc/fstab: static file system information.

#

# <file system> <mount point>   <type>  <options>       <dump>  <pass>

proc            /proc           proc    defaults        0       0

# /dev/sda1

UUID=4a621e4d-8c8b-4b39-8934-98ab8aa52ebc /               ext3    defaults,errors=remount-ro 0       1

# /dev/sda5

UUID=9c82bf09-c6f7-4042-8927-34e46518b224 none            swap    sw              0       0

/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto,exec 0       0

/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec 0       0

/dev/sdb1       /photos         auto    defaults        0       0
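Rather than waiting for a reboot to prove the new entry, it can be tested immediately; a minimal sanity check, assuming /photos was mounted manually as shown above:

sudo umount /photos
sudo mount -a
mount | grep photos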

Credits and copyright on technical research go to Techotopia.com, 2009.

Posted via email from IT Rockstar

MSSQL Recovery

The MSSQL Challenge

Today I submitted a challenge to our in-house MSSQL DBAs; while I cannot disclose all of the information for confidentiality reasons, I would like to share some of it. Take part in my challenge and I will welcome you to a free lunch in New York.

So it starts off with a little story as it relates to MSSQL services.

When closing deals we have live meetings in person, or conference calls, where executive decision makers and engineers meet and discuss the game plan for managed hosting solutions. At this point experts are challenged and exercise their ideas to the best of their ability, working through every possible scenario before making a decision to move forward.

The most important role of an engineer during this meeting is to have a full understanding of the proposal, an understanding of the client’s needs, and lastly to be as accurate as possible when answering questions. It is in the best interest of both sides to raise the most challenging scenarios, and most of these usually relate to security or disaster recovery.

The worst response is to give the wrong answer; the second worst is to respond with “Let me research that and I’ll get back to you”.

So back to my original point: the MSSQL Challenge.
Keep in mind that on a conference call or in a live meeting the response times would not be as gracious as they are in an email.

Question #1: In a Microsoft SQL Server cluster, if and when the primary node fails for reasons such as loss of power, a system crash, or any other unexpected failure, what happens to my transactions at the point of failure? Are they written to disk? When are they replicated, how often are they replicated, is replication down to the second, in real time? What is the guarantee?

Question #2: Can you tell me what roll forward and roll back mean with regard to transactions when a cluster initiates failover?
Why would I want Enterprise Edition over Standard if Enterprise rolls forward, brings the database online, and then rolls back?

Key Notes: With SQL Server 2000, the failover process in a cluster took several minutes, depending on how much data was in the transaction log to be rolled forward and back.

With SQL Server 2005 & 2008 Enterprise Edition, the startup is much faster because they roll forward any completed transactions, bring the database online, and then roll back any incomplete transactions.

With the Standard Edition of SQL Server 2005 & 2008, the failover process brings the database online only after the transactions have been rolled forward and back.

Posted via email from IT Rockstar

Data Migration Services

I have been working with CSV files, importing massive amounts of data between two customer relationship management suites. The two I am working with are Microsoft CRM 4.0 and Salesforce CRM.

A great idea for a project I recently completed was finding a way to export all new leads generated in SF over a seven day period and import them into CRM.

I created a CRM virtual machine inside a vSphere cloud, and this machine acted as a domain controller, database server, web server and application server.
For this specific project I did not want CRM to modify the existing production Active Directory; to bypass this I created a .LOCAL domain for the installation and created a trust for the users who would authenticate to the application.

Performance was not a worry either, given that direct access would be limited to a handful of users who were going to process a specific job on the application, not to mention that the actual host of the vSphere cloud runs Intel Nehalem processors with lavish amounts of disk available for I/O-intensive operations on RAID 10 volumes and enough RAM to minimize the impact of swap.

This project was completed in three easy steps.

Step 1 – Export the data: from within SF, generate a report to a CSV file containing the fields needed for the import process.

Step 2 – Create a VBScript that removes the commas found within double quotation marks:

Source:   http://blogs.technet.com/heyscriptingguy/archive/2008/06/16/how-can-i-remove-specified-commas-from-a-comma-separated-values-file.aspx

Step 3 – Import the CRM data: use the Microsoft Data Migration Wizard, mapping fields to new custom fields.

I hope this is a viable solution for anyone who encounters a similar task; it can also serve as a great reference for migrations to other applications and services.

Posted via email from IT Rockstar

Microsoft Security Essentials

Microsoft Security Essentials provides real-time protection for your home PC that guards against viruses, spyware, and other malicious software.  This is a free download from Microsoft that is simple to install, easy to use, and always kept up to date so you can be assured your PC is protected by the latest technology.

Up until now, many of us may still have been using third-party applications to provide a complete malware/spyware/virus suite solution. However, I highly recommend taking a look at MSE as the successor for your workstation needs.

MSE will install on any Windows XP SP2+, Vista or Windows 7 operating system.

Recently I was speaking with a friend who had a compromised machine, infected with spyware and viruses. MSE achieved a full recovery of their system with no loss of data, keeping my success rate with it at 100%.

Here is a screenshot from my virtual machine’s installation.

In an enterprise or server environment I recommend ESET’s NOD32 http://www.eset.com/.

Posted via email from IT Rockstar

MySpace SQL Server

Happy New Year, 2010

I was reading the latest SQL Server Magazine, and there is a great feature cover story on MySpace’s infrastructure, with tons of great detail regarding the database.

I have always known that MySpace operated using Microsoft’s SQL Server as its DB engine of choice.
The fact of the matter is that having such a vastly popular, well-known social media site running SQL Server demonstrates the success of operating a Microsoft solution.

I may be partial to Microsoft solutions, given that I am a certified professional for their products and services, including SQL Server and Exchange, but this is a great case study that I enjoyed reading and sharing with others who may be looking to grow their next emerging empire.

The MySpace team is often asked “Why SQL Server?” and why they chose the Windows platform.
Compared to open-source competitors, it is really easy to get up and running and to develop rapidly with Microsoft solutions.

When the site started in 2003, they had one instance of SQL Server 2000 running on one server. The next step in growth was to scale with a master/slave model using transactional replication.

“Service Broker has enabled us to reduce data errors across our distributed databases by orders of magnitude. This is significant because data errors used to be the greatest problem our group had to deal with.”

Christa Stelzmuller, Chief Data Architect, MySpace


Benefits

MySpace gained the enhanced data integrity and better user experience it had sought by using the Service Broker feature of SQL Server 2005 to create its Service Dispatcher application in support of its distributed database infrastructure. The company found Service Broker to have enterprise-class performance, and its internal developers are enjoying faster development by relying on Service Broker to handle asynchronous messaging between the database instances.

There is a full case study available from Microsoft at http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000004532
In addition, you can visit SQL Server Magazine and request InstantDoc ID 103058, or subscribe to the magazine:
http://www.sqlmag.com/Articles/ArticleID/103058/103058.html

Posted via email from IT Rockstar

Migration Services

Throughout my experience I have always worked on building my strongest customer relationships through migration services. How important are migration services?

Hosting companies and consultants alike may offer a broad range of migration services; however, the level of certainty and confidence going forward can vary. I have worked with partners and competitors alike whose migration services have ranged from site visits to the customer site or the hosting facility, to complete remote satellite migrations with multiple participants across the globe.

The most critical thing to understand about a migration is that the reputation committed in taking it on means everything.

·         How much time will it take?

·         How much does it cost?

·         Security?

“When it’s done” should never be the stated answer; the approximation given should be as close as possible, and in any event a discovery process should happen beforehand in order to answer this question more accurately.
This is also where it should be discussed whether the migration will take place over a single business day, an entire weekend, or the course of several nights during off-peak hours, depending on the nature of the business being migrated.

The cost factor should also tie in things such as whether there are on-site system administrators or a team of engineers involved, and whether any other hosting providers being migrated away from will take part in the project. If multiple parties are involved, there should be an agreement on the financial conditions that secure this cost: for example, will it be billed hourly even when time is spent idle waiting on a third party?

Security is always crucial to maintaining the trust and integrity of information. If the party being migrated is in an industry with compliance regulations, don’t just take their word for it. Security is not just a service; it is a pervasive concern for everything that needs to be done.

I’d also like to add transparency. In several industries, such as SaaS providers where being up and running is 100% of the business, there may be a need for such a high level of commitment that everything regarding the migration must be completely transparent. This can be done with the right hands and a level of experience in doing so. Building things in parallel and maintaining production and staging environments while transitionally replicating information is a start toward that convergence.

Contact me or come visit my New York office and I would love to discuss this in more detail. For all others, feel free to use my information if it provides any expert guidance.

Posted via email from IT Rockstar

Brocade and fabric-based servers

For product planners at Dell, HP, IBM, Juniper and Sun, the talk about fabric-based servers moved from philosophical discussion to urgent-must-take-action status when Cisco introduced the Unified Computing System. 

The presence of UCS from the networking giant is tangible evidence that a large piece of the future server value proposition will be intellectual property related to the network embedded in the server.

Blade server vendors pioneered embedded server networks by integrating adapter cards and switches from networking vendors. If Cisco is right, successful server vendors in the future will separate themselves from their competitors with their own sophisticated networking technology that is needed to connect virtual servers, networks and storage residing in environments ranging from a single chassis to part of a public cloud that stretches around the globe.

None of the aforementioned server and networking vendors, including HP with the acquisition of 3Com, has introduced their own FCoE intellectual property with which to compete against Cisco. I believe this is why all of these vendors still covet the Brocade portfolio that spans Ethernet and Fibre Channel, servers to fabric, and from core to edge.

Source: Frank Berry @ networkcomputing.com

Posted via email from IT Rockstar

Amazon Kindle DX HOHOHO Happy Holidays

Santa never sleeps, and in the spirit of giving Logicworks has its own Santa.
Mr. Carter Burden has continued the tradition of gifting us with pure awesomeness.
This year all Logicworks employees received the Amazon Kindle DX.

Some of the main features: to begin with, you will notice the massive 9.7” screen, the full QWERTY keyboard, and 3.3 GB of user-accessible space. PDF support is a nice addition, and the Amazon Whispernet wireless is another great feature. Put on your reading glasses and enjoy; this bad boy is going with me on my next vacation.

Posted via email from IT Rockstar