Monday, December 21, 2015

Using Powershell to Manipulate TFS Get Latest, Check In, and Merge

In our environment, changes made in the Test branch have to travel through the Main branch and into the Release branch to be deployed into production. Sometimes changes need to move through quickly without regard to other changes, especially in an environment where there may be a single coder. Note: the following code will merge all checked in code regardless of who checked it in. Be careful in multi-coder environments.

While PowerShell is being used to write this process, many of the commands below are actual command line directives. These older, mature commands have more features than the native PowerShell TFS commands and are documented more extensively.

The first line in the PowerShell script should be the Set-Location command. This will set the script's working location to the location of the TFS workspace. In my instance it is “C:\Development”.

set-location c:\development

Next, define the comment that I want attached to the “Check in” of the changed files to each branch.

$Comment = """123456""" # doubled quotes embed literal quote marks, so TF.exe receives a quoted comment

In order to keep from typing the full path of the TF.exe on each Merge and Checkin, a variable is used to hold the file location.

$CommandLocation = "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\TF.exe "

Since there are multiple branches that need to be merged and checked in, I’m hard coding the GET command for each branch. Basically, it gets the latest version of each file (not including your changes) from each branch. This is done to keep merge conflicts to a minimum.

$GetDevCommand = " get $/<TeamProjectName>/<DevBranch>/<Directory> /recursive /force "
$GetMainCommand = " get $/<TeamProjectName>/<MainBranch>/<Directory> /recursive /force "
$GetReleaseCommand = " get $/<TeamProjectName>/<ReleaseBranch>/<Directory> /recursive /force "

<TeamProjectName> is the name of the Team Project you are working in under the Team Collection.

<xBranch> is either Dev, Main, or Release in this instance.

<Directory> is the name of a directory (and possible subdirectories) below the branch. This can be as granular as needed in that <directory/subdirectory/subdirectory> can be specified. The /recursive flag recurses through all subdirectories under the <Directory> and /force causes them to be overwritten.

The same hard coding occurs in the Merge and Checkin commands below.

$MergeToMain = " merge $/<TeamProjectName>/<DevBranch>/<Directory> $/<TeamProjectName>/<MainBranch>/<Directory> /recursive /version:T"
$CheckinToMain = " checkin /comment:$Comment /recursive /noprompt ""c:\development\<TeamProjectName>\<MainBranch>\<Directory>"""

$MergeToRelease = " merge $/<TeamProjectName>/<MainBranch>/<Directory> $/<TeamProjectName>/<ReleaseBranch>/<Directory> /recursive /version:T"
$CheckinToRelease = " checkin /comment:$Comment /recursive /noprompt ""c:\development\<TeamProjectName>\<ReleaseBranch>\<Directory>"""

Once you enter the TeamProjectName, Branch, and Directory, the work of moving files in TFS can begin. This is accomplished with the Start-Process command.

By using the -Wait and -RedirectStandardOutput parameters, each command executes sequentially and redirects its output to its own txt file for troubleshooting later.

#Get Latest Files From Source Control
Start-Process "$CommandLocation" "$GetDevCommand" -nonewwindow -wait -RedirectStandardOutput c:\temp\GetDev-TFS.txt
Start-Process "$CommandLocation" "$GetMainCommand" -nonewwindow -wait -RedirectStandardOutput c:\temp\GetMain-TFS.txt
Start-Process "$CommandLocation" "$GetReleaseCommand" -nonewwindow -wait -RedirectStandardOutput c:\temp\GetRelease-TFS.txt

#Merge and Checkin to Main
Start-Process "$CommandLocation" "$MergeToMain" -nonewwindow -wait -RedirectStandardOutput c:\temp\MergePreProd-TFS.txt
Start-Process "$CommandLocation" "$CheckinToMain" -nonewwindow -wait -RedirectStandardOutput c:\temp\CheckinPreProd-TFS.txt

#Merge and Checkin to Release
Start-Process "$CommandLocation" "$MergeToRelease" -nonewwindow -wait -RedirectStandardOutput c:\temp\MergeProd-TFS.txt
Start-Process "$CommandLocation" "$CheckinToRelease" -nonewwindow -wait -RedirectStandardOutput c:\temp\CheckinProd-TFS.txt

In this instance the output for each command is directed to a file in the C:\Temp directory for viewing later.
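Putting the pieces together, the whole process is only a handful of lines. This is a sketch of the complete script using the placeholder paths from above; adjust the workspace, Team Project, branch, and directory names for your environment before running it.

```powershell
# Sketch of the full merge pipeline. Assumes a TFS workspace mapped to
# C:\Development and the VS 2010 TF.exe location used in this post.
Set-Location C:\Development

$Comment = """123456"""  # doubled quotes pass a quoted comment through to TF.exe
$CommandLocation = "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\TF.exe "

# Get latest from each branch to minimize merge conflicts.
$GetDevCommand     = " get $/<TeamProjectName>/<DevBranch>/<Directory> /recursive /force "
$GetMainCommand    = " get $/<TeamProjectName>/<MainBranch>/<Directory> /recursive /force "
$GetReleaseCommand = " get $/<TeamProjectName>/<ReleaseBranch>/<Directory> /recursive /force "

$MergeToMain   = " merge $/<TeamProjectName>/<DevBranch>/<Directory> $/<TeamProjectName>/<MainBranch>/<Directory> /recursive /version:T"
$CheckinToMain = " checkin /comment:$Comment /recursive /noprompt ""c:\development\<TeamProjectName>\<MainBranch>\<Directory>"""

$MergeToRelease   = " merge $/<TeamProjectName>/<MainBranch>/<Directory> $/<TeamProjectName>/<ReleaseBranch>/<Directory> /recursive /version:T"
$CheckinToRelease = " checkin /comment:$Comment /recursive /noprompt ""c:\development\<TeamProjectName>\<ReleaseBranch>\<Directory>"""

# Each command runs sequentially (-Wait) with its output captured to its own file.
Start-Process "$CommandLocation" "$GetDevCommand"     -NoNewWindow -Wait -RedirectStandardOutput c:\temp\GetDev-TFS.txt
Start-Process "$CommandLocation" "$GetMainCommand"    -NoNewWindow -Wait -RedirectStandardOutput c:\temp\GetMain-TFS.txt
Start-Process "$CommandLocation" "$GetReleaseCommand" -NoNewWindow -Wait -RedirectStandardOutput c:\temp\GetRelease-TFS.txt

Start-Process "$CommandLocation" "$MergeToMain"   -NoNewWindow -Wait -RedirectStandardOutput c:\temp\MergePreProd-TFS.txt
Start-Process "$CommandLocation" "$CheckinToMain" -NoNewWindow -Wait -RedirectStandardOutput c:\temp\CheckinPreProd-TFS.txt

Start-Process "$CommandLocation" "$MergeToRelease"   -NoNewWindow -Wait -RedirectStandardOutput c:\temp\MergeProd-TFS.txt
Start-Process "$CommandLocation" "$CheckinToRelease" -NoNewWindow -Wait -RedirectStandardOutput c:\temp\CheckinProd-TFS.txt
```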



Thursday, December 17, 2015

SQL Reporting Services 2014 Save, Refresh, and Print Buttons on Separate Lines in IE

In SSRS 2014 there is a bug when rendering pages in Internet Explorer where the Save, Refresh, and Print buttons will all show on different lines.

Below is a screenshot of a report rendering on an unpatched server.

By moving to build 12.00.4422 (CU2 for SQL Server 2014 Service Pack 1) the problem is corrected.


Thursday, December 3, 2015

Quickly Flip Database to Mirroring Partner

While Database Mirroring is certainly not a new technology, I’ve only started using it for HA on some of our older SQL instances. In doing so, I’ve found the need to quickly (and manually) flip the databases using mirroring to the mirrored instance. While you can accomplish this slowly using the GUI, if you have more than 1 or 2 databases to flip it can get old very quickly.

With that in mind, I’ve created a script where the output of the script can be used to quickly flip the databases that meet certain criteria. In this case, any databases that are currently SYNCHRONIZED and are the PRINCIPAL in the mirroring session will be selected.

SELECT 'ALTER DATABASE [' + DB_NAME(database_mirroring.database_id) + '] SET PARTNER FAILOVER' AS 'Script'
       ,DB_NAME(database_mirroring.database_id) AS 'DB Name'
       ,database_mirroring.mirroring_partner_name AS 'Mirroring Failover Partner'
FROM sys.database_mirroring
WHERE database_mirroring.mirroring_state_desc = 'SYNCHRONIZED'
  AND database_mirroring.mirroring_role_desc = 'PRINCIPAL'

The meat of the script is in column 1. By running the commands generated in column 1 in another window with the same connection, you can quickly flip the databases to the mirrored servers. Columns 2 and 3 are there for extra information, namely the name of the database to be flipped and the name of the mirrored server that will serve as the primary instance of the database.
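If you'd rather skip the copy and paste step entirely, the same idea can be automated from PowerShell. This is a minimal sketch, assuming the SqlServer module (for Invoke-Sqlcmd) is available; the <ServerName> instance name is a placeholder you must replace.

```powershell
# Sketch: generate and execute the failover commands in one pass.
# Assumes the SqlServer module; <ServerName> is a placeholder instance name.
Import-Module SqlServer

$query = @"
SELECT 'ALTER DATABASE [' + DB_NAME(database_id) + '] SET PARTNER FAILOVER' AS Script
FROM sys.database_mirroring
WHERE mirroring_state_desc = 'SYNCHRONIZED'
  AND mirroring_role_desc = 'PRINCIPAL'
"@

$commands = Invoke-Sqlcmd -ServerInstance "<ServerName>" -Query $query

foreach ($row in $commands) {
    Write-Host "Running: $($row.Script)"
    Invoke-Sqlcmd -ServerInstance "<ServerName>" -Query $row.Script
}
```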


Wednesday, October 14, 2015

Using SQLCMD in SSMS to quickly check sync jobs

For some of us, an unfortunate side effect of using 3rd party applications to run different segments of the business is the need to sync information between databases. Whether it be orders, employees, or something else, data needs to be synced between these systems to allow each business segment to run.

As a DBA, running down issues in these sync processes can be very difficult with one SSMS query window looking in each system, constantly switching back and forth between results sets to look for differences.

Enter SQLCMD mode in SSMS.

Using SQLCMD mode inside your query allows you to open up a connection to a server, query a database and return a result from as many different servers as needed all within the same window.

In this example, I’ve created an Employee database on 3 different instances of SQL Server. Inside those DB’s, there is an Employee table that holds all of my employees. At the top of the query, I’ve created a variable called “Emp” and set it to “Gail Huff” so it can be used across all the connections.

NOTE: In order to run this query, you must turn on SQLCMD mode by going to Query -> SQLCMD Mode

:SETVAR Emp "Gail Huff"

:CONNECT <Server2008R2>
USE EmployeeDB2008R2
GO
SELECT 'EmployeeDB2008R2' AS DB,* FROM Employee WHERE Employee.Name = '$(Emp)'
GO

:CONNECT <Server2012>
USE EmployeeDB2012
GO
SELECT 'Employee2012' AS DB,* FROM Employee WHERE Employee.Name = '$(Emp)'
GO

:CONNECT <Server2014>
USE EmployeeDB2014
GO
SELECT 'Employee2014' AS DB,* FROM Employee WHERE Employee.Name = '$(Emp)'
GO

Note the SETVAR and CONNECT keywords. Both do exactly what they sound like and allow SQLCMD mode to use variables and quickly connect to different SQL servers. After running the query, my result set is as follows and you can see that Gail has the same status across all servers.
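The same multi-server check can also be scripted outside of SSMS. Here is a sketch in PowerShell using Invoke-Sqlcmd, assuming the SqlServer module is installed; the server names are placeholders.

```powershell
# Sketch: run the same employee lookup across several instances.
# Assumes the SqlServer module; server names are placeholders.
Import-Module SqlServer

$emp = "Gail Huff"
$targets = @(
    @{ Server = "<Server2008R2>"; Database = "EmployeeDB2008R2" },
    @{ Server = "<Server2012>";   Database = "EmployeeDB2012"   },
    @{ Server = "<Server2014>";   Database = "EmployeeDB2014"   }
)

foreach ($t in $targets) {
    # Tag each result set with its source database, mirroring the SQLCMD version.
    Invoke-Sqlcmd -ServerInstance $t.Server -Database $t.Database `
        -Query "SELECT '$($t.Database)' AS DB, * FROM Employee WHERE Name = '$emp'"
}
```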

Happy querying!

Monday, September 28, 2015

Intermittent Network Connectivity Failures in Windows 2012 on VMWare with VMXNET3 Driver

Over the past several weeks we’ve had issues with some servers sporadically dropping off the network and quickly regaining connection. Network connectivity was checked and tests were run, but nothing was found. One commonality among the servers is that each is running in VMware and uses the vmxnet3 driver as the main NIC driver. After reading a couple of articles about potential issues surrounding this driver, we opened up perfmon to check for discarded packets and found that packets were being discarded at an alarming rate.


After reading the following article:

Large packet loss at the guest OS level on the VMXNET3 vNIC in ESXi

we made the prescribed changes and haven’t experienced issues since.

Friday, September 25, 2015

Visual Studio Solution is Missing or Can’t Add Another Project to Solution

If you’ve ever opened up a Visual Studio Project and were left wondering where the solution file is, specifically because you need to add another project to that solution, the answer lies in the IDE options.

By navigating to Tools – Options –Projects and Solutions – General, you’ll find the “Always show solution” check box. After selecting that, the top level solution will appear in the Solution Explorer and you’ll be able to add another project to it.

Thursday, September 24, 2015

Re-add Grouping to Tablix After Deleting it in SSRS

If you’ve ever accidentally deleted the default row grouping on a tablix in SSRS, you have two options to get it back.

1. Recreate the tablix. (Which isn’t fun if you have several columns or lots of formatting)
  2. Recreate the grouping that’s done by default during the wizard. (You used the wizard, didn’t you?)

Rather than starting from scratch with a new tablix, I’m going to show how to get the default grouping back up and running.

The steps are as follows:

  1. Right click on the detail line that needs the grouping re-added.
  2. Select “Add Group … Adjacent Above…”
  3. Then select the first column in the detail line as the “Group By”
4. After the grouping is created, cut and paste each of the columns to the newly created line above it. This will keep the formatting and formulas intact. After all the columns have been moved, you can delete the old detail line.

Thursday, September 3, 2015

Sending Availability Group replication traffic through a dedicated network.

We recently set up a SQL 2012 HADR solution that took advantage of Availability Groups (AG’s). In this instance, we utilized a multi-subnet cluster to allow for the primary replica to live in our primary data center, while having the secondary replica in another data center.

Note: In this setup, only asynchronous replicas are allowed.

On each of the servers in the cluster, we have 3 network connections.

Public: All user traffic will flow through this connection

Private: For internal cluster communication

Replication: All AG traffic will be routed through this

After those dedicated connections are set up, database mirroring endpoints are used to receive connections from the other instances in the AG. You can read more about database endpoints here. Usually, when the endpoints are created in an AG, a script like the following is executed without specifying a specific IP to use.

CREATE ENDPOINT [Hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL);

Without specifying an IP for the endpoint to listen on, it will accept a connection on any valid IP. But, if we want to make sure our AG and user traffic are segregated, we must specify the IP of the Replication network connection. So the script would look like so:

CREATE ENDPOINT [Hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (<ReplicationIP>)) -- <-- Your Replication IP here
    FOR DATABASE_MIRRORING (ROLE = ALL);

You can reference the BOL article for creating an endpoint here.
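After creating the endpoint, it's worth confirming which IP it is actually bound to. Here is a quick sketch using PowerShell's Invoke-Sqlcmd to query sys.tcp_endpoints (assumes the SqlServer module; <ServerName> is a placeholder):

```powershell
# Check the mirroring endpoint's listener IP, port, and state.
# Assumes the SqlServer module; <ServerName> is a placeholder instance name.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "<ServerName>" -Query @"
SELECT name, ip, port, state_desc
FROM sys.tcp_endpoints
WHERE type_desc = 'DATABASE_MIRRORING'
"@ | Format-Table -AutoSize
```

A NULL in the ip column means the endpoint is listening on all valid IPs; your Replication IP should show there once the endpoint is bound to it.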

NOTE: Equally important when using the CREATE AVAILABILITY GROUP statement is to specify the replication IP in the ENDPOINT_URL section. By default, when scripting the AG, the FQDN will show in that section.

CREATE AVAILABILITY GROUP [<AGName>]
FOR DATABASE MyDB1, MyDB2
REPLICA ON
'COMPUTER01' WITH
        (ENDPOINT_URL = 'TCP://<ReplicationIP1>:5022',
         FAILOVER_MODE = MANUAL),
'COMPUTER02' WITH
        (ENDPOINT_URL = 'TCP://<ReplicationIP2>:5022',
         FAILOVER_MODE = MANUAL);

You can read up on the CREATE AVAILABILITY GROUP statement here.

After this is done, and you’ve completed the rest of the AG setup, replication traffic will be routed through your Replication network. A quick and easy test of this is to open up Windows Task Manager and watch the traffic on the three different Ethernet connections.

Note the adapter name for each connection.

To test this, open up an SSMS connection to the AG listener and send a large amount of DML statements to a database within the AG. If you set up the endpoint with the default of listening on all IPs, you’ll most likely see a high amount of send and receive traffic on the Public interface. But, if you’ve set up the endpoint to listen on the Replication IP, you’ll see a large amount of receive traffic on the Public interface (DML) and a high amount of send traffic on the Replication interface (sending the changes to the secondary replica).

Thursday, August 27, 2015

Nodes are not consistently configured with IPv4 and/or IPv6 addresses on network adapters that are usable by the cluster.

During cluster validation of a multi-subnet SQL Server 2012 Availability Group cluster, the validation report kept returning the following warning:

"Nodes are not consistently configured with IPv4 and/or IPv6 addresses on network adapters
that are usable by the cluster."

Below that entry was the following:

Node <Server1> configured with IP addresses from protocol IPv4
Node <Server2> is configured with IP addresses from protocol IPv4 and IPv6

After comparing ipconfig /all and device manager (show hidden devices) entries on both nodes, Server2 included 4 Microsoft ISATAP Adapters, while Server1 didn't.

After a couple hours of research and verification that we aren't using IPv6 on our network, I disabled the ISATAP devices inside Device Manager (shown in the screenshot as already disabled) and the cluster was able to validate properly.
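If you need to do the same across several nodes, ISATAP can also be turned off from an elevated prompt rather than clicking through Device Manager on each server. A quick sketch (run as Administrator):

```powershell
# Disable the ISATAP tunneling interface machine-wide.
# Run from an elevated PowerShell or command prompt.
netsh interface isatap set state disabled

# Verify the new state before re-running cluster validation.
netsh interface isatap show state
```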