Tag Archive : PowerShell


Here's an interesting issue you may run into when migrating from Exchange 2003 to Exchange 2010. E-mails passing through the Exchange 2010 Hub Transport role may bounce back with a Non-Delivery Report (NDR) carrying the SMTP code 550 5.7.1 and a message like "Submission has been disabled for this account."

You would normally only see this NDR when a user's mailbox has exceeded the ProhibitSendQuota and/or ProhibitSendReceiveQuota limit (i.e. their mailbox is full). However, there's a potential "bug" here. In this case, you will get the NDR even if the mailbox is using the database default limits and is NOT full. If you bring up the account properties in Active Directory Users and Computers (ADUC) on a system with the Exchange 2003 tools loaded, everything looks fine on the account. However, if you look up the account in PowerShell with the Exchange 2010 tools loaded, you'll see that the ProhibitSendQuota and/or ProhibitSendReceiveQuota has a value (often 0KB, but it can be any value lower than the default and still cause this problem) even with UseDatabaseQuotaDefaults set to "True".

For some reason, the Exchange 2010 Hub Transport seems to ignore the UseDatabaseQuotaDefaults = True flag and will reject messages based on the ProhibitSendQuota and/or ProhibitSendReceiveQuota limits.
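If you want to spot-check a single account first, something like this will show the mismatch (the mailbox identity here is just a placeholder):

# Spot-check the quota settings on one mailbox (the identity is a placeholder).
Get-Mailbox -Identity "jsmith" | Format-List Name,UseDatabaseQuotaDefaults,ProhibitSendQuota,ProhibitSendReceiveQuota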

If you have a large environment, you'll probably want to find and fix all accounts with this issue right away. It won't be practical to wait for users to report the problem. Luckily, PowerShell makes it easy for us.


To find all users across your entire AD forest that will run into this problem, run this command:

Get-Mailbox -IgnoreDefaultScope -ResultSize Unlimited -Filter { (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendQuota -ne "unlimited") -or (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendReceiveQuota -ne "unlimited") } | Select Name,UserPrincipalName,Database,ServerName,UseDatabaseQuotaDefaults,ProhibitSendQuota,ProhibitSendReceiveQuota | Export-CSV -Path C:\Scripts\badquotas.csv

To FIX the problem for all users across your entire AD forest, run this command:

Get-Mailbox -IgnoreDefaultScope -ResultSize Unlimited -Filter { (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendQuota -ne "unlimited") -or (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendReceiveQuota -ne "unlimited") } | Set-Mailbox -ProhibitSendQuota unlimited -ProhibitSendReceiveQuota unlimited
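If you'd rather preview the changes before committing to them, Set-Mailbox supports the standard -WhatIf switch, so a dry run would look like this:

Get-Mailbox -IgnoreDefaultScope -ResultSize Unlimited -Filter { (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendQuota -ne "unlimited") -or (UseDatabaseQuotaDefaults -eq $true -and ProhibitSendReceiveQuota -ne "unlimited") } | Set-Mailbox -ProhibitSendQuota unlimited -ProhibitSendReceiveQuota unlimited -WhatIf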

In my most recent run-in with this issue, I found just under 2,000 accounts impacted by it. With about 60,000 mailboxes total, that's only about 3% affected. That said, it's still much quicker to let PowerShell do all the hard work for you. Even with 60,000 mailboxes spread across 4 domains in the forest, these PowerShell commands took less than 5 seconds to complete.

Last week I showed you a script that I wrote to create a large number of databases and database copies on an Exchange 2010 Database Availability Group (DAG). Since that script was a work in progress, I needed to test it again, which meant getting rid of most of the databases it created the first time I ran it. To do this, I used the same .csv file I used to create the databases, and just removed from the list the databases I didn't want deleted. Then I ran this script, which I called rmdb.ps1:

# Remove Databases
# By Josh M. Bryant
# www.fixtheexchange.com
#
# Import the list of databases to remove.
$data = Import-CSV C:\Scripts\dbcreate\exdbs.csv
# Get all mailbox servers that are members of a DAG.
$Servers = Get-MailboxServer | Where {$_.DatabaseAvailabilityGroup -ne $null}
ForEach ($line in $data)
{
    $dbname = $line.DBName
    # Remove all database copies first; Exchange won't delete a database while copies exist.
    ForEach ($Server in $Servers)
    {
        Remove-MailboxDatabaseCopy -Identity $dbname\$Server -Confirm:$False
    }
    # With the copies gone, the database itself can be removed.
    Remove-MailboxDatabase -Identity $dbname -Confirm:$false
}



On an Exchange 2010 DAG, you have to delete all copies of a database BEFORE it will allow you to delete the database itself. The GUI only lets you do this one copy at a time, so if you've got a large number of databases to delete, this script is a real time saver. Database copies and databases are deleted much faster than they're created.

A few weeks ago, I showed you my solution for creating a large number of disks with PowerShell and diskpart, for future use as Exchange 2010 database and log drives. I finally had time to go back and create the databases for this environment. In this environment, I had 4 Exchange 2010 servers with the Mailbox (MB) role on them, all part of the same Database Availability Group (DAG). I needed to create a total of 94 databases and 376 database copies in this DAG. To do this, I wrote the following script, which I called "dbcreate.ps1":


# Exchange 2010 Database Creation Script
# By Josh M. Bryant
# Last updated 10/18/2011 11:00 AM
#
#
# Specify database root path.
$dbroot = "E:\EXCH10"
#
# Specify log file root path.
$logroot = "L:\EXCH10"
#
# Specify CSV file containing database/log paths and database names.
$data = Import-CSV C:\Scripts\dbcreate\exdbs.csv
#
# Get list of mailbox servers that are members of a DAG.
$Servers = Get-MailboxServer | Where {$_.DatabaseAvailabilityGroup -ne $null}
#
# Specify Lagged Copy Server identifier.
$lci = "MBL"
#
# Specify ReplayLagTime in format Days.Hours:Minutes:Seconds
$ReplayLagTime = "14.0:00:00"
#
# Specify TruncationLagTime in format Days.Hours:Minutes:Seconds
$TruncationLagTime = "0.1:00:00"
#
# Specify RpcClientAccessServer name.
$RPC = "mail.domain.com"
#
#
#
# Create databases.
ForEach ($line in $data)
{
    $dbpath = $line.DBPath
    $dbname = $line.DBName
    $logpath = $line.LogPath
    New-MailboxDatabase -Name $dbname -Server $line.Server -EdbFilePath $dbroot\$dbpath\$dbname.edb -LogFolderPath $logroot\$logpath
}
#
# Mount all databases.
Get-MailboxDatabase | Mount-Database
Start-Sleep -s 60
#
# Create Database Copies.
ForEach ($line in $data)
{
    ForEach ($Server in $Servers)
    {
        If ($Server -notlike $line.Server)
        {
            Add-MailboxDatabaseCopy -Identity $line.DBName -MailboxServer $Server
        }
    }
}
#
# Setup lagged copies.
ForEach ($Server in $Servers)
{
    If ($Server -like "*$lci*")
    {
        Get-MailboxDatabaseCopyStatus -Server $Server | Set-MailboxDatabaseCopy -ReplayLagTime $ReplayLagTime -TruncationLagTime $TruncationLagTime
    }
}
#
# Set RpcClientAccess Server and enable Circular Logging on all databases.
Get-MailboxServer | Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer $RPC -CircularLoggingEnabled $true


The exdbs.csv file referenced in the script contained these 4 columns: "Server,DBPath,DBName,LogPath".
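For illustration, a couple of rows might look like this (the server and database names here are made up, not from my environment):

Server,DBPath,DBName,LogPath
MB01,DB01,DB01,DB01
MB02,DB02,DB02,DB02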

The script first creates the 94 databases, 47 on one server and 47 on another. It then creates copies of all these databases across all servers in the DAG. 2 of the servers are for lagged copies, so it sets those up based on the server naming convention. The final step is to set the RpcClientAccessServer to the FQDN of the CAS Array on all databases. UPDATE: I have the script setting all the databases for circular logging at the very end now too.

This is still a work in progress, so use at your own risk.  As always please leave author information at the top of the script intact if you use it, and don't forget to link back to this site if you share it anywhere else.

The script worked great, despite a few "RPC endpoint mapper" errors. It got the databases about 95% set up. One server had a service not running on it, so the database copies were in the "Suspended" state there. Simple use of the "Resume-MailboxDatabaseCopy" command easily corrected this issue. I also had to go back and specify the Activation Preference, which was easy to do based on the naming convention, by running the Get-MailboxDatabaseCopyStatus command against each server and piping it into the Set-MailboxDatabaseCopy -ActivationPreference command.
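Roughly, that clean-up looked like this; the server name and preference value below are placeholders, not the exact values from my environment:

# Resume any suspended copies on the affected server (server name is a placeholder).
Get-MailboxDatabaseCopyStatus -Server MB02 | Where {$_.Status -eq "Suspended"} | Resume-MailboxDatabaseCopy
#
# Set the Activation Preference for all copies on a server (the value 2 is a placeholder).
Get-MailboxDatabaseCopyStatus -Server MB02 | Set-MailboxDatabaseCopy -ActivationPreference 2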

UPDATE: I made some changes to the script, and no longer saw any errors during database creation.  I also fixed the syntax for the database and log paths so they get created in the correct location.  

The end result was 376 fully functional and ready-to-use database copies. Even with the minor clean-up after running the script, it made everything a lot easier. Creating this many database copies manually would have taken quite some time.


I had an interesting request at work today. Management wanted all employees with a company-issued cell phone to have that cell phone number show up in the "Mobile" field of the "Phone/Notes" tab in Outlook. Due to international legal concerns, it was to be US employees only.

Obviously this information is stored in Active Directory (AD), and Exchange presents it to Outlook via the Global Address List (GAL), so to accomplish this task, I'd need to update the appropriate field in AD. My first thought was "This should be easy to do with PowerShell, IF someone has a list of names and mobile phone numbers." I was in luck, and received a list with almost 7,000 names and mobile phone numbers for all US employees. I did a little formatting in Excel (all names in column A, all numbers in the appropriate format in column B) and saved it as a .csv file.
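The script below expects "Name" and "Number" column headers, so the first few rows of cellnumbers.csv would look something like this (the names and numbers are made up):

Name,Number
jsmith,555-010-0100
mjones,555-010-0101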

Of course I wasn't going to get off that easy. The users were spread across multiple domains within our AD forest. The Set-QADUser command from the QAD Tools snap-in gave an error saying it couldn't resolve a directory object for the given identity. The command defaults to the domain that the account running the script is logged into, so the script was giving me this error for every user that was not in the same domain. To get around this, I had to add a few lines to the script and use the Get-QADUser command first, then pipe it into the Set-QADUser command.

Here is the final import-mpn.ps1 script:

# Mobile Phone Number Import Script
# By Josh M. Bryant
# https://www.fixtheexchange.com
# Last updated 9/29/11
# Requires Quest ActiveRoles Management Shell for Active Directory (AKA QAD Tools) from: http://www.quest.com/powershell/activeroles-server.aspx
#
# Load QAD tools.
Add-PSSnapin Quest.ActiveRoles.ADManagement
#
# Specify path to .csv file mapping account name to mobile phone number.
$data = Import-CSV C:\Scripts\cellnumbers.csv
#
# Specify path to log file.
$log = "C:\Scripts\import-mpn.log"
#
# Specify FQDN(s) to search.
$Domains = @(
"domain.com"
"child1.domain.com"
"child2.domain.com"
"child3.domain.com"
)
# Assign Mobile Phone Number to account from .csv
ForEach ($line in $data)
{
    $name = $line.Name
    $number = $line.Number
    $Domains | % {Get-QADUser -Service $_ -Identity $name | Set-QADUser -MobilePhone $number | Out-File -FilePath $log -Append}
}


With almost 7000 accounts to process, it took quite some time for the script to complete, but it got the job done. 

As always, if you use or repost this script anywhere, please keep the author information at the top in place, and remember to link back to this site. Thanks!

Last week I showed you how I used PowerShell and diskpart to partition, format, and label 193 disks. If you’ll recall, I said I still needed to figure out how to get 188 of those disks mapped to folders for Volume Mount Points.

My goal was to get this 100% automated. Due to time constraints, I’m only at about 95%. This last bit takes a small amount of manual work. Don’t worry, it’s just simple copy and paste.

First we need to list out all volumes. To do this, open PowerShell and run diskpart:

diskpart
list volume
exit

Next we open Excel and put "Volume" in cell A1 and "Label" in cell B1. Copy the "Volume ###" values from the diskpart output and paste them into column A of the Excel sheet, then copy the "Label" values and paste them into column B. Remove any disks that already have drive letters assigned. If you remember from last week, I had 5 disks that I assigned drive letters; after removing them from this list, I had 188 rows (not counting the column names in row 1). Save it as a .csv file.
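The end result is a volumes.csv along these lines (the volume numbers and labels here are examples; the log volume labels contain an "L", which the script below keys on):

Volume,Label
Volume 7,DB01
Volume 8,DB01L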

Next we run a PowerShell script I called cvmp.ps1:

# Volume Mount Point Creation Script
# By Josh M. Bryant
# www.fixtheexchange.com
# Last modified 9/6/2011 8:30 AM
#
$dmount = "E:\EXCH10"
$lmount = "L:\EXCH10"
$data = Import-CSV C:\Scripts\volumes.csv
ForEach ($line in $data)
{
    $volume = $line.Volume
    $label = $line.Label
    Add-Content -path C:\Scripts\volumes.txt -value "Select $volume"
    If ($label -like "*L*")
    {
        Add-Content -path C:\Scripts\volumes.txt -value "assign mount=$lmount\$label"
    }
    Else
    {
        Add-Content -path C:\Scripts\volumes.txt -value "assign mount=$dmount\$label"
    }
}
diskpart /s C:\Scripts\volumes.txt

That's it! The PowerShell script uses the .csv file to build the diskpart script that assigns the volume mount points.
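Using the sample rows above, the generated volumes.txt would contain something like this (same hypothetical names):

Select Volume 7
assign mount=E:\EXCH10\DB01
Select Volume 8
assign mount=L:\EXCH10\DB01L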

I would eventually like to go back and script the .csv file creation using the Export-CSV command. I was having trouble getting the output from the diskpart command (which is not a PowerShell command) into a usable format for the Export-CSV command.
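For what it's worth, here's a rough sketch of one way it might work, assuming English diskpart output; the fixed-width column offsets below are guesses that would need verifying against real output before use:

# Feed "list volume" to diskpart and keep only the volume rows.
$raw = "list volume" | diskpart
$rows = $raw | Where-Object { $_ -match '^\s*Volume \d+' } | ForEach-Object {
    # diskpart prints fixed-width columns; these offsets are assumptions.
    New-Object PSObject -Property @{
        Volume = $_.Substring(2, 10).Trim()
        Label  = $_.Substring(19, 13).Trim()
    }
}
$rows | Export-Csv -Path C:\Scripts\volumes.csv -NoTypeInformation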

Even with the manual copy and paste work, this saved a ton of time. Can you imagine how long all of this would have taken without scripting?

If you use scripts from this site, please keep the author info intact, and include a link back to this site if you post them anywhere else.

Today at work I had to partition, format, and label 193 disks in Server 2008 R2. Each disk was a LUN on an EMC SAN. The server is an HP ProLiant BL460c G7 Server Blade. This will be an Exchange 2010 mailbox server that is part of a Database Availability Group (DAG) when it is finished, so 94 of these disks are for databases and 94 are for transaction logs.

5 of the disks received drive letters, the remaining 188 are to be volume mount points. There was no way I was going to do any of this manually, so I created some PowerShell scripts that use the diskpart command to do all the work for me. The result is having 5 disks partitioned, formatted, labeled and drive letters assigned. Then 188 folders get created for use as the volume mount points. The remaining 188 disks get partitioned, formatted, and labeled. I’m still trying to work out a way to script mapping the volume mount points to the folders, so this is a work in progress. (UPDATED: Click here to see how I solved this problem.)

First we have the primarydrives.txt script for diskpart to create the main drives that actually have letters assigned to them:

select disk 3
create partition primary NOERR
format FS=NTFS LABEL="SAN Exchange" UNIT=64K QUICK NOWAIT NOERR
assign letter=D NOERR
select disk 4
create partition primary NOERR
format FS=NTFS LABEL="SAN Temp" UNIT=64K QUICK NOWAIT NOERR
assign letter=T NOERR
select disk 196
create partition primary NOERR
format FS=NTFS LABEL="SAN Tracking Logs" UNIT=64K QUICK NOWAIT NOERR
assign letter=M NOERR
select disk 5
create partition primary NOERR
format FS=NTFS LABEL="SAN Databases" UNIT=64K QUICK NOWAIT NOERR
assign letter=E NOERR
select disk 6
create partition primary NOERR
format FS=NTFS LABEL="SAN Exchange" UNIT=64K QUICK NOWAIT NOERR
assign letter=L NOERR

Next we have the fvmpcreate.ps1 script, which creates folders based on names in a text file (there's a sample of that file after the script below). I had a spreadsheet with the names the databases are going to have, so I just copied those into the text file for this script to use. The script also writes a text file for diskpart to use to partition, format, and label each disk. Like I said, I'm still working on how to get it to map the volume mount points to the folders created by the script.

# Folder and Volume Mount Point Creation Script
# By Josh M. Bryant
# www.fixtheexchange.com
# Last Updated 9/2/2011 3:40 PM
#
$dbfile = "C:\Scripts\dbdrives.txt"
$logfile = "C:\Scripts\logdrives.txt"
$dbdata = Get-Content C:\Scripts\dbnames.txt
$ldata = Get-Content C:\Scripts\lognames.txt
$dbpath = "E:\EXCH10"
$logpath = "L:\EXCH10"
$drive = 6
#
# Create Database Folders and Volume Mount Points
#
ForEach ($line in $dbdata)
{
    $drive = $drive + 1
    New-Item $dbpath\$line -type directory
    add-content -path $dbfile -value "select disk $drive"
    add-content -path $dbfile -value "create partition primary NOERR"
    add-content -path $dbfile -value "format FS=NTFS LABEL=`"$line`" UNIT=64K QUICK NOWAIT NOERR"
}
#
# Create Log Folders and Volume Mount Points
#
ForEach ($line in $ldata)
{
    $drive = $drive + 1
    New-Item $logpath\$line -type directory
    add-content -path $logfile -value "select disk $drive"
    add-content -path $logfile -value "create partition primary NOERR"
    add-content -path $logfile -value "format FS=NTFS LABEL=`"$line`" UNIT=64K QUICK NOWAIT NOERR"
}
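For reference, dbnames.txt and lognames.txt just contain one name per line, which becomes both the folder name and the volume label. For example (hypothetical names; note the log names should carry an "L" so the mount point script can tell them apart later):

DB01
DB02
DB03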

The last script, createdrives.ps1, is the master script that calls all the others.

# Exchange Drive Creation Script
# By Josh M. Bryant
# www.fixtheexchange.com
# Last Updated 9/2/2011 3:55 PM
#
# Create primary drives.
diskpart /s C:\Scripts\primarydrives.txt > primarydrives.log
# Wait for disks to format.
sleep 30
# Create EXCH10 Folders
New-Item E:\EXCH10 -type directory
New-Item L:\EXCH10 -type directory
# Create Folders and Diskpart Scripts
& "C:\Scripts\fvmpcreate.ps1"
# Create Volume Mount Points for Databases
diskpart /s C:\Scripts\dbdrives.txt > dbdrives.log
# Create Volume Mount Points for Logs
diskpart /s C:\Scripts\logdrives.txt > logdrives.log

This ended up being a huge time saver. Everything completed in about 2 minutes. Even if I can’t figure out how to script out the volume mount point mapping, it will have saved me a tremendous amount of time. The best part is this is scalable, so I can easily adapt it for use on other servers, regardless of the number of disks that need to be configured.

UPDATE: Click here to see how I solved the volume mount point creation.

If you use these scripts, or want to re-post them anywhere else, please keep the author information at the top of the script, and include a link back to this site. Thanks!