Tag Archive : Exchange 2010


Apple has done it again.  Yet another iOS release is causing major issues when a device (iPhone, iPad, iPod, etc.) running it connects to Exchange via ActiveSync.  Microsoft and Apple are working together to resolve this issue; in the past, this has usually resulted in Apple releasing a patch for iOS.  Until a fix is released, Microsoft has published some workarounds here: http://support.microsoft.com/kb/2814847

If you have any iOS devices connecting to Exchange in your organization at all, you'll want to pay attention to this one.

UPDATE: Apple has released a support article for this issue as well: http://support.apple.com/kb/TS4532

Today I had my first "Phone Advisory" with a large customer wanting some guidance on upgrading to Service Pack 2 for Exchange 2010.  It was an excellent discussion, and even though SP2 has been out for about six months now, there are still many organizations of all sizes that haven't installed it yet.  If you haven't performed this upgrade yet, here are a few things to help you through it.

 

First, I'd recommend reviewing this TechNet article; it covers most of what you need to know.

 

One of the biggest things about this upgrade is that it includes a schema update.  Setup will automatically attempt to run the schema update, as long as the account you're running it under is a member of both the "Schema Admins" and "Enterprise Admins" groups.  In many larger organizations, it may not be possible for you as an Exchange Admin to perform the schema update.  Luckily, the schema update can be performed separately; however, it must be done first or the Service Pack will not install.  If you are unable to perform the schema update yourself, run it through your organization's change management process and assist whoever has the rights to perform it.  More information on how to run schema updates for Exchange can be found here.  If you need to know exactly what is changed in the schema, consult the Exchange Server Active Directory Schema Changes Reference.
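If you do end up running the schema update separately, it's done with Setup's prepare switches from the extracted SP2 files. This is just a sketch of the general idea; confirm the exact steps against the TechNet guidance before running it in your environment:

```powershell
# Run from the folder containing the extracted SP2 files, under an account
# that is a member of both "Schema Admins" and "Enterprise Admins"
.\Setup.com /PrepareSchema   # extends the AD schema
.\Setup.com /PrepareAD       # updates the Exchange objects and permissions in AD
```

Once replication has carried these changes to all domain controllers, the Service Pack itself can be installed by an account without schema rights.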

 

Once your schema update is complete, there are some prerequisites on the operating system that need to be met before you can install the Exchange Service Pack, including OS Service Packs, hotfixes, or both.  For more information on the prerequisites, check out this TechNet article.  For Service Pack 2, there are some new prerequisites that apply only to servers with the CAS role installed.  The SP2 release notes contain a couple of PowerShell commands that you can copy and paste to make installation of these new prerequisites easy.
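Those copy-and-paste commands follow the usual Add-WindowsFeature pattern. The snippet below is a hypothetical illustration only; the feature names here are placeholders, so copy the actual command from the SP2 release notes for your OS version rather than using this one:

```powershell
# Hypothetical illustration of the release-notes pattern; the real feature
# list for your OS version is in the SP2 release notes
Import-Module ServerManager
Add-WindowsFeature Web-WMI
```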

 

The customer I worked with today had all five of the Exchange roles running on their own dedicated servers, which makes the upgrade process pretty straightforward, as outlined in the TechNet article I referenced earlier.  You start with your Client Access Servers; if a hardware load balancing solution is being used, this can be done without interrupting service to your end users.  After all of your CAS servers have been upgraded, move on to your Hub Transport servers, then Unified Messaging (if you have it), and finally your Mailbox servers.  If your Mailbox servers are part of a DAG, there are some additional considerations, which are outlined in the aforementioned article as well.
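For the DAG members, the general idea is to move the active databases off a server and block activation on it before patching, then re-enable it afterward. Here's a rough sketch with a hypothetical server name (MB01); the TechNet article and the maintenance scripts that ship with Exchange cover the full procedure:

```powershell
# Move active databases off the server to be patched (hypothetical name MB01)
Move-ActiveMailboxDatabase -Server MB01

# Prevent database copies from activating on it during the upgrade
Set-MailboxServer MB01 -DatabaseCopyAutoActivationPolicy Blocked

# ...install the Service Pack and reboot...

# Allow activation again once the upgrade is complete
Set-MailboxServer MB01 -DatabaseCopyAutoActivationPolicy Unrestricted
```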

 

If you have Multi-Role servers, you'll want to take a look at this Blog post from the Exchange Team on how to patch them.

 

 

So what are some "gotchas" when installing SP2?

 

When you run Setup, don't forget to run it with administrator-level permissions (right-click, Run as administrator).

 

If you have the Unified Messaging role (I've found this to be rather uncommon), AND you've installed additional language packs, you need to uninstall the language packs on your UM server(s) prior to installing the Service Pack.  After the Service Pack has been installed, you can install the SP2 versions of any language packs that are needed.

 

If you have any Group Policies that define PowerShell execution policies, it is important to make sure that the MachinePolicy and UserPolicy scopes are set to "Undefined".  If they are not, the install may fail.  To check your execution policy, run the command: Get-ExecutionPolicy -List.  For more information, see KB2668686.
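A quick way to check is:

```powershell
# List the effective execution policy at each scope; MachinePolicy and
# UserPolicy should both report "Undefined" before running Setup.
# Anything else at those two scopes means a Group Policy is setting them.
Get-ExecutionPolicy -List
```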

 

The version of the Management Tools must match the version of Exchange installed on the server you want to manage.  If you have the EMC installed on a Windows Vista or Windows 7 workstation, don't forget to update it; until you do, it won't be able to connect to your servers.

 

 

One important question I was asked today was, "Is there a way to roll this update back?"  The answer is NO.  Once the schema changes have been made, you can't undo them, and once a Service Pack has been installed on an Exchange server, you can't uninstall it.  Attempting to uninstall the Service Pack will result in Exchange being removed from the server.

 

 

If you haven't installed a Service Pack on Exchange 2010 before, I hope this article helps you in your endeavor.  Best of luck!

 

Here's an interesting issue you may run into when migrating from Exchange 2003 to Exchange 2010.  E-mails passing through the Exchange 2010 Hub Transport role may bounce back with a Non-Delivery Report (NDR) with the SMTP code 550 5.7.1 that says something like "Submission has been disabled for this account."

You would normally only see this NDR when a user's mailbox has exceeded the ProhibitSendQuota and/or ProhibitSendReceiveQuota limit (i.e. their mailbox is full).  However, there's a potential "bug" here.  In this case, you will get the NDR even if the mailbox is using the database default limits and is NOT full.  If you bring up the account properties in Active Directory Users and Computers (ADUC) on a system with the Exchange 2003 tools loaded, everything looks fine on the account.  However, if you look up the account in PowerShell with the Exchange 2010 tools loaded, you'll see that the ProhibitSendQuota and/or ProhibitSendReceiveQuota has a value (often 0KB, but it can be any value lower than the default and still cause this problem) even with UseDatabaseQuotaDefaults set to "True".

For some reason, the Exchange 2010 Hub Transport seems to ignore the UseDatabaseQuotaDefaults = True flag and will reject messages based on the ProhibitSendQuota and/or ProhibitSendReceiveQuota limits.
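Before fixing things in bulk, you can spot-check a single affected mailbox (hypothetical alias jdoe below) to see the mismatch for yourself:

```powershell
# Show the quota-related attributes for one mailbox (hypothetical alias)
Get-Mailbox jdoe | Format-List UseDatabaseQuotaDefaults, ProhibitSendQuota, ProhibitSendReceiveQuota

# If UseDatabaseQuotaDefaults is True but either quota shows a value other
# than "unlimited", this mailbox can hit the NDR described above
```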

If you have a large environment, you'll probably want to find and fix all accounts with this issue right away.  It won't be practical to wait for someone to report that they've had this problem.   Luckily PowerShell makes it easy for us.

 

To find all users across your entire AD forest that will run into this problem, run this command:

Get-Mailbox -IgnoreDefaultScope -ResultSize Unlimited -Filter { (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendQuota -ne "unlimited") -or (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendReceiveQuota -ne "unlimited") } | Select Name,UserPrincipalName,Database,ServerName,UseDatabaseQuotaDefaults,ProhibitSendQuota,ProhibitSendReceiveQuota | Export-CSV -Path c:\scripts\badquotas.csv

To FIX the problem for all users across your entire AD forest, run this command:

Get-Mailbox -IgnoreDefaultScope -ResultSize Unlimited -Filter { (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendQuota -ne "unlimited") -or (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendReceiveQuota -ne "unlimited") } | Set-Mailbox -ProhibitSendQuota unlimited -ProhibitSendReceiveQuota unlimited

In my most recent run-in with this issue, I found just under 2,000 accounts impacted by it.  With about 60,000 mailboxes total, that's only about 3% affected.  That being said, it's still much quicker to let PowerShell do all the hard work for you.  Even with 60,000 mailboxes spread out across 4 domains in the forest, these PowerShell commands took less than 5 seconds to complete.

Service Pack 2 for Exchange 2010 was released today!

 

Check out the new features here: http://technet.microsoft.com/en-us/library/hh529924.aspx

 

Download it here: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=28190

 

Or read the announcement from the Exchange Team here: http://blogs.technet.com/b/exchange/archive/2011/12/05/released-exchange-server-2010-sp2.aspx

 

Alex Fontana over on the VMware blogs posted a great article on choosing between virtual disks (VMDK files) and Raw Device Mappings (RDMs) when virtualizing an Exchange server.  Check it out here: http://blogs.vmware.com/apps/2011/11/virtualized-exchange-storage-vmdk-or-rdm-or.html

I have deployed virtualized Exchange servers on VMDK files in ESXi 4.1 environments without any issues.  I've also used the "in-guest iSCSI" method he talks about in the article, which is one of my personal favorites; it's what I use for the Exchange servers that reside in my budget lab, along with the FreeNAS-based virtual SAN I demonstrated in that series.

For production environments, I've stuck with either RDMs or in-guest iSCSI, depending on the SAN architecture.  I'll definitely consider VMDKs from here on out.

Last week I showed you a script that I wrote to create a large number of databases and database copies on an Exchange 2010 Database Availability Group (DAG).  Since that script was a work in progress, I needed to test it again, which meant getting rid of most of the databases it created the first time I ran it.  To do this, I used the same .csv file I used to create the databases, and just removed the databases I didn't want deleted from the list.  Then I ran this script, which I called rmdb.ps1:

# Remove Databases
# By Josh M. Bryant
# www.fixtheexchange.com
#
$data = Import-CSV C:\Scripts\dbcreate\exdbs.csv
$Servers = Get-MailboxServer | Where {$_.DatabaseAvailabilityGroup -ne $null}
ForEach ($line in $data)
{
    $dbname = $line.DBName
    ForEach ($Server in $Servers)
    {
        Remove-MailboxDatabaseCopy -Identity "$dbname\$Server" -Confirm:$False
    }
    Remove-MailboxDatabase -Identity $dbname -Confirm:$False
}


 

On an Exchange 2010 DAG, you have to delete all copies of a database BEFORE it will allow you to delete the database itself.  The GUI only allows you to do this one at a time, so if you've got a large number of databases that need deleting, this script is a real time saver.  Database copies and databases are deleted much faster than they're created.

A few weeks ago, I showed you my solution for creating a large number of disks with PowerShell and diskpart, for future use as Exchange 2010 database and log drives.  I finally had time to go back and create the databases for this environment.  In this environment, I had 4 Exchange 2010 servers with the Mailbox (MB) role on them, all part of the same Database Availability Group (DAG).  I needed to create a total of 94 databases and 376 database copies in this DAG.  To do this, I wrote the following script, which I called "dbcreate.ps1":


# Exchange 2010 Database Creation Script
# By Josh M. Bryant
# Last updated 10/18/2011 11:00 AM
#
#
# Specify database root path.
$dbroot = "E:\EXCH10"
#
# Specify log file root path.
$logroot = "L:\EXCH10"
#
# Specify CSV file containing database/log paths and database names.
$data = Import-CSV C:\Scripts\dbcreate\exdbs.csv
#
# Get list of mailbox servers that are members of a DAG.
$Servers = Get-MailboxServer | Where {$_.DatabaseAvailabilityGroup -ne $null}
#
# Specify Lagged Copy Server identifier.
$lci = "MBL"
#
# Specify ReplayLagTime in format Days.Hours:Minutes:Seconds
$ReplayLagTime = "14.0:00:00"
#
#Specify TruncationLagTime in format Days.Hours:Minutes:Seconds
$TruncationLagTime = "0.1:00:00"
#
# Specify RpcClientAccessServer name.
$RPC = "mail.domain.com"
#
#
#
# Create databases.
ForEach ($line in $data)
{
$dbpath = $line.DBPath
$dbname = $line.DBName
$logpath = $line.LogPath
New-MailboxDatabase -Name $dbname -Server $line.Server -EdbFilePath $dbroot\$dbpath\$dbname.edb -LogFolderPath $logroot\$logpath
}
#
# Mount all databases.
Get-MailboxDatabase | Mount-Database
Start-Sleep -s 60
#
# Create Database Copies.
ForEach ($line in $data)
{
ForEach ($Server in $Servers)
    {
    If ($Server -notlike $line.Server)
        {
        Add-MailboxDatabaseCopy -Identity $line.DBName -MailboxServer $Server
        }
    }
}
#
# Setup lagged copies.
ForEach ($Server in $Servers)
{
If ($Server -like "*$lci*")
    {
    Get-MailboxDatabaseCopyStatus -Server $Server | Set-MailboxDatabaseCopy -ReplayLagTime $ReplayLagTime -TruncationLagTime $TruncationLagTime
    }
}
#
# Set RpcClientAccess Server and enable Circular Logging on all databases.
Get-MailboxServer | Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer $RPC -CircularLoggingEnabled $true


The exdbs.csv file referenced in the script contained these 4 columns: "Server,DBPath,DBName,LogPath".

The script first creates the 94 databases, 47 of them on one server and 47 on another.  It then creates copies of all these databases across all servers in the DAG.  Two of the servers are for lagged copies, so it sets those up based on the server naming convention.  The final step is to set the RpcClientAccessServer to the FQDN of the CAS array on all databases. UPDATE: I have the script setting all the databases for circular logging at the very end now too.

This is still a work in progress, so use at your own risk.  As always please leave author information at the top of the script intact if you use it, and don't forget to link back to this site if you share it anywhere else.

The script worked great, despite a few "RPC endpoint mapper" errors; it got the databases about 95% set up.  One server had a service not running on it, so the database copies were in the "Suspended" state there; a simple run of the "Resume-MailboxDatabaseCopy" command easily corrected this.  I also had to go back and specify the activation preference, which was easy to do based on the naming convention: I ran the Get-MailboxDatabaseCopyStatus command against each server and piped it into Set-MailboxDatabaseCopy -ActivationPreference.
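That activation-preference pass looked roughly like this. It's a sketch with a hypothetical server name and preference value; map each server in your naming convention to its own number:

```powershell
# Hypothetical example: make server MB01 the preferred (first) activation
# target for every database copy it holds; repeat per server with the
# appropriate preference number
Get-MailboxDatabaseCopyStatus -Server MB01 | Set-MailboxDatabaseCopy -ActivationPreference 1
```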

UPDATE: I made some changes to the script, and no longer saw any errors during database creation.  I also fixed the syntax for the database and log paths so they get created in the correct location.  

The end result was 367 fully functional, ready-to-use database copies.  Even with the minor clean-up after running the script, it made everything a lot easier.  Creating this many database copies would have taken quite some time if done manually.

 

Today at work I had to partition, format, and label 193 disks in Server 2008 R2. Each disk was a LUN on an EMC SAN. The server is an HP ProLiant BL460c G7 server blade. This will be an Exchange 2010 mailbox server that is part of a Database Availability Group (DAG) when it is finished, so 94 of these disks are for databases and 94 are for transaction logs.

5 of the disks received drive letters; the remaining 188 are to be volume mount points. There was no way I was going to do any of this manually, so I created some PowerShell scripts that use the diskpart command to do all the work for me. The result: the 5 lettered disks get partitioned, formatted, labeled, and assigned drive letters; then 188 folders get created for use as the volume mount points; and the remaining 188 disks get partitioned, formatted, and labeled. I'm still trying to work out a way to script mapping the volume mount points to the folders, so this is a work in progress. (UPDATED: Click here to see how I solved this problem.)

First we have the primarydrives.txt script for diskpart to create the main drives that actually have letters assigned to them:

select disk 3
create partition primary NOERR
format FS=NTFS LABEL="SAN Exchange" UNIT=64K QUICK NOWAIT NOERR
assign letter=D NOERR
select disk 4
create partition primary NOERR
format FS=NTFS LABEL="SAN Temp" UNIT=64K QUICK NOWAIT NOERR
assign letter=T NOERR
select disk 196
create partition primary NOERR
format FS=NTFS LABEL="SAN Tracking Logs" UNIT=64K QUICK NOWAIT NOERR
assign letter=M NOERR
select disk 5
create partition primary NOERR
format FS=NTFS LABEL="SAN Databases" UNIT=64K QUICK NOWAIT NOERR
assign letter=E NOERR
select disk 6
create partition primary NOERR
format FS=NTFS LABEL="SAN Exchange" UNIT=64K QUICK NOWAIT NOERR
assign letter=L NOERR

Next we have the fvmpcreate.ps1 script, which creates folders based on names in a text file. I had a spreadsheet with what the database names are going to be, so I just copied those into the text files this script reads. The script also writes a text file for each set of disks, which diskpart then uses to partition, format, and label them. Like I said, I'm still working on how to get it to map the volume mount points to the folders it creates.

# Folder and Volume Mount Point Creation Script
# By Josh M. Bryant
# www.fixtheexchange.com
# Last Updated 9/2/2011 3:40 PM
#
$dbfile = "C:\Scripts\dbdrives.txt"
$logfile = "C:\Scripts\logdrives.txt"
$dbdata = Get-Content C:\Scripts\dbnames.txt
$ldata = Get-Content C:\Scripts\lognames.txt
$dbpath = "E:\EXCH10\"
$logpath = "L:\EXCH10\"
$drive = 6
#
# Create Database Folders and Volume Mount Points
#
ForEach ($line in $dbdata)
{
    $drive = $drive + 1
    New-Item $dbpath$line -type directory
    Add-Content -Path $dbfile -Value "select disk $drive"
    Add-Content -Path $dbfile -Value "create partition primary NOERR"
    Add-Content -Path $dbfile -Value "format FS=NTFS LABEL=`"$line`" UNIT=64K QUICK NOWAIT NOERR"
}
#
# Create Log Folders and Volume Mount Points
#
ForEach ($line in $ldata)
{
    $drive = $drive + 1
    New-Item $logpath$line -type directory
    Add-Content -Path $logfile -Value "select disk $drive"
    Add-Content -Path $logfile -Value "create partition primary NOERR"
    Add-Content -Path $logfile -Value "format FS=NTFS LABEL=`"$line`" UNIT=64K QUICK NOWAIT NOERR"
}

The last script I called createdrives.ps1; this is the master script that calls all the others.

# Exchange Drive Creation Script
# By Josh M. Bryant
# www.fixtheexchange.com
# Last Updated 9/2/2011 3:55 PM
#
# Create primary drives.
diskpart /s C:\Scripts\primarydrives.txt > primarydrives.log
# Wait for disks to format.
sleep 30
# Create EXCH10 Folders
New-Item E:\EXCH10 -type directory
New-Item L:\EXCH10 -type directory
# Create Folders and Diskpart Scripts
& "C:\Scripts\fvmpcreate.ps1"
# Create Volume Mount Points for Databases
diskpart /s C:\Scripts\dbdrives.txt > dbdrives.log
# Create Volume Mount Points for Logs
diskpart /s C:\Scripts\logdrives.txt > logdrives.log

This ended up being a huge time saver. Everything completed in about 2 minutes. Even if I can’t figure out how to script out the volume mount point mapping, it will have saved me a tremendous amount of time. The best part is this is scalable, so I can easily adapt it for use on other servers, regardless of the number of disks that need to be configured.

UPDATE: Click here to see how I solved the volume mount point creation.

If you use these scripts, or want to re-post them anywhere else, please keep the author information at the top of the script, and include a link back to this site. Thanks!