Extracting information from a Cisco router config with Powershell

August 27, 2011

Why this script?

Information about systems in a local network is often distributed over several devices/sources. These sources are not always all up to date. After something (or a lot of things) changed in your network, you might find yourself facing the task of bringing all these devices/sources to a consistent configuration state.

From the outside, your gateway router or firewall is the ‘entry point’ to your local network. So if something changed in your network, the gateway router is a good point to start checking network configuration consistency. This script was written to help with this task.

What it does

The script will read a Cisco router config file and extract some interesting bits of it by applying regular expression pattern matching to each line. The kind of ‘lazy parsing’ used here is far from complete. My main goal was to get information about hosts (represented in the router config by IP addresses) that have ports opened individually for them, e.g. smtp, www, imap, etc. For that reason I don’t handle config statements which open ports for whole ranges of (internal) destination addresses. Also, in my environment we use ‘static NAT’ for some hosts (we’re moving away from using it), so the script extracts information about the mapping between private (internal) and official (external) IP addresses as well. Some general information about the router itself is also processed (router hostname, name servers in use, interfaces and assigned IP addresses, …).

The information extracted from the config file is transformed into an internal XML representation. After processing, the script simply writes this XML representation to a file. Although extracting this information might already be helpful in itself, it would be overkill to use Powershell and XML only for this basic task; a few simple grep commands might have been enough for that as well. Representing the extracted information as XML makes two things easier. First, you can relate bits of information from different locations in the router config file (‘on what interface was ACL 110 used again?’). This isn’t so easy if you just grep against the config file. Second, storing the extracted info in an XML file allows easy further processing by additional scripts. These will be part of blog posts yet to come. Just to give you an idea about what will follow later:

  • The information will be augmented by doing reverse DNS lookups (to get the host names for the ‘naked’ IP addresses).
  • Also, pinging the IP addresses will (usually) show us whether the systems are still alive.
  • In case of IP addresses representing Windows systems, using WMI might get us even more information about a host which was originally represented in the router config only by a meager IP address.
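Once the configuration lives in an XML file, relating pieces of information becomes a matter of a single XPath query. A minimal sketch against the cisco-cfg.xml produced by the script below (note that the acl attribute holds the full access-group argument, e.g. ‘110 in’, hence the starts-with):

```powershell
# Sketch: load the XML file produced by the script below and answer
# 'on what interface was ACL 110 used again?' with one XPath query.
$xcfg = [xml] (Get-Content .\cisco-cfg.xml)
$itf = $xcfg.SelectSingleNode("/root/my_config/system/interface[starts-with(@acl, '110')]")
if ($itf -ne $null) {
	"ACL 110 is applied on interface '" + $itf.GetAttribute("name") + "'"
}
```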

You might get your XML to start from cheaper…

If you have a Cisco router running with a newer version of IOS (XR), you might be able to directly save its configuration to an XML file. Then you wouldn’t need this script at all. But you might still be interested in the upcoming posts about augmenting or analyzing router information.

Design rationale of the XML representation used

General structure

If you look at the XML file generated by this script, its structure might at first look overly complicated. On the top level there is a twofold distinction. One branch contains information about the router itself (hostname, name servers, interface configuration, …), all under the top level node <my_config>. The other branch – starting with <systems> – lists items from the config which are about (internal) systems known to the router. This information is again quite deeply structured. Why not use a simple representation of open ports per internal system, along the lines of ‘port X is open for destination IP Y’ or – in XML – <open_port src_ip=”…” dst_ip=”…” port=”…”/>? Why use a far more complex representation which first postulates the existence of a <system>, having an <interface>, which is assigned an <ip>, for which finally the router has something to say about open ports or static NAT entries?

The reasons for this complex representation are extensibility and reusability. The information about systems extracted from a router configuration is very rudimentary and can be quite useless if not augmented by additional information from other sources. One example: Cisco router configurations are all about IP addresses. Some network administrators might know every system accessible from the outside just by looking at its IP address, but not all will. For that reason it makes sense to augment the extracted information by doing a reverse DNS lookup later on. And if we have to augment the information we got from a router anyway, why not start with a representation of internal systems which is designed right from the start for being easily augmented?
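To make this concrete, here is roughly what one entry for an internal system looks like in the generated file (all values invented):

```xml
<systems>
  <system>
    <interface>
      <ip>192.168.1.10</ip>
      <nat_ip src="gw-router">203.0.113.10</nat_ip>
      <open_port src="gw-router" acl="110" proto="tcp" op="eq" port="smtp"/>
      <open_port src="gw-router" acl="110" proto="tcp" op="eq" port="www"/>
    </interface>
  </system>
</systems>
```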

Avoiding the abundant use of attributes on XML nodes

An early unpublished version of this script encoded a lot of information into XML node attributes – like in the XML fragment given in the previous section. The reason for using attributes in the first place was that this results in a very compact encoding of information. If every bit of information for an XML node is encoded into attributes, you won’t even need an explicit closing tag – ‘/>’ will do. But just using attributes has several drawbacks. First, you can’t assign multiple values to an attribute – at least not without giving attribute values some internal structure like <host ip=”a.b.c.d;e.f.g.h”/>. And doing something like that would only mess things up completely. Second, imagine you have to join information about systems originating from two different sources A and B. From each source you have generated a separate XML file which contains the information that source has about internal systems. Now you would like to merge the XML files into one more complete representation. Merging would be straightforward if the systems in question were disjoint between the two sources. Unfortunately this won’t happen, so you have some information about a system X from source A and other bits of information from source B. Joining these bits automatically is possible if both sources include a common item for a system, e.g. an IP address or a host name. The actual joining can be done easily with tools available for XML if you just have to copy the child nodes from an XML node for a system in source A to the equivalent system node in source B. But if you make heavy use of attributes, the same task suddenly gets very difficult, since there is no easy way to copy all attributes from one XML node to another. If you do know one, please tell me.
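Copying child nodes, on the other hand, is straightforward with ImportNode. A sketch of the merge just described – the file names are made up, the node structure is the one used in this post, and systems are matched by identical <ip> text:

```powershell
# Sketch: merge the children of each <system> node from source B into the
# matching <system> node in source A (matched via the <ip> element text).
$xa = [xml] (Get-Content .\systems-from-A.xml)
$xb = [xml] (Get-Content .\systems-from-B.xml)
foreach ($sysB in $xb.SelectNodes("/root/systems/system")) {
	$ip = $sysB.SelectSingleNode(".//ip").InnerText
	$sysA = $xa.SelectSingleNode("/root/systems/system[interface/ip='$ip']")
	if ($sysA -ne $null) {
		foreach ($child in $sysB.ChildNodes) {
			# ImportNode creates a copy owned by the target document:
			$null = $sysA.AppendChild($xa.ImportNode($child, $true))
		}
	}
}
$xa.Save(".\systems-merged.xml")
```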

The Script

# scan-cisco-config.ps1
# Scan a configuration file of a cisco router and extract some general information
# about which ports are open for which IP addresses.
# Extracts some general router config info as well.
# The extracted information is saved as XML to enable further analysis and reuse.
# (c) Marcus Schommler, 2011

# default value for host name (used until one is read from the config file):
$hostname = "cisco-router"

$ip_pat = "([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)"
$proto_pat = "(tcp|udp|ip|icmp|gre|pim|esp)"

# create a stub xml node set from a string:
$xcfg = [xml] "<root><my_config><system/></my_config><systems/></root>"

# read a saved cisco config file to process:
$cisco_cfg = @()
$cisco_cfg = Get-Content .\cisco-bo-config.txt

# iterate over the lines read:
foreach ($cl in $cisco_cfg) {
	if ($cl.length -lt 3) {
		# line too short to be of interest
		continue
	}
	if ($cl -match "ip nat inside source static $ip_pat $ip_pat") {
		# processing static nat entries
		$curr_itf = $null
		
		# add a new host element to the xml document we're bulding:
		$h = $xcfg.CreateElement("system")
		$sxml = "<interface><ip>" + $Matches[1] + "</ip>"
		$sxml += "<nat_ip src='$hostname'>" + $Matches[2] + "</nat_ip></interface>" 
		$h.InnerXml = $sxml
		
		# Since the straightforward '$xcfg.root.systems.AppendChild($h)' doesn't work (please ask MS why),
		# we're using an alternate syntax. Assigning to $null suppresses the node object
		# that AppendChild would otherwise write to the output stream.
		# thanks to: http://www.terminal23.net/2007/09/powershell_nuance_with_appendc.html
		$null = $xcfg.root["systems"].AppendChild($h)

	} elseif ($cl -match "access-list (\d+) permit $proto_pat (.*)") {
		# found acl permit entry, continue processing with the matching parts of the line:
		$curr_itf = $null
		$acl = $Matches[1]
		$proto = $Matches[2]
		$permit = $Matches[3]
		$add = $true
		$do_continue = $false
		$src_ip = $null
		$src_mask = $null
		if ($permit -match "any host(.*)") {
			$permit = $Matches[1]
			$do_continue = $true
		} elseif ($permit -match "host(.*)host(.*)") {
			$src_ip = $Matches[1]
			$permit = $Matches[2]
			$do_continue = $true
		} elseif ($permit -match "$ip_pat\s+$ip_pat\s+host(.*)") {
			$src_ip = $Matches[1]
			$src_mask = $Matches[2]
			$permit = $Matches[3]
			$do_continue = $true
		}
		if ($do_continue) {
			if ($permit -match "$ip_pat (eq|gt) (\w+)") {
				# single port or 'greater than'
				$dst_ip = $Matches[1]
				$oper = $Matches[2]
				$port = $Matches[3]
			} elseif ($permit -match "$ip_pat range (\w+)\s+(\w+)") {
				# a range of ports is open
				$dst_ip = $Matches[1]
				$oper="range"
				$port = $Matches[2] + "-" + $Matches[3]
			} elseif ($permit -match "$ip_pat`$") {
				# no port given -> everything is open for this destination
				$dst_ip = $Matches[1]
				$oper = ""
				$port = "all"
			} else {
				$add = $false
			}
		}
		# only add an entry if the line matched one of the known forms above:
		if ($do_continue -and $add) {
			# look up the ip in our xml host list:
			$h = $xcfg.SelectSingleNode("/root/systems/system/interface[nat_ip='$dst_ip']")
			$h2 = $xcfg.SelectSingleNode("/root/systems/system/interface[ip='$dst_ip']")
			if (($h -eq $null) -and ($h2 -eq $null)) {
				# a system entry was not added while parsing static nat entries
				# -> add one now:
				$h = $xcfg.CreateElement("system")
				$sxml = "<interface><ip>" + $dst_ip + "</ip></interface>"
				$h.InnerXml = $sxml
				$null = $xcfg.root["systems"].AppendChild($h)
				$h = $h.SelectSingleNode("interface")
			} 
			$p = $xcfg.CreateElement("open_port")
			$p.SetAttribute("src", $hostname)
			$p.SetAttribute("acl", $acl)
			$p.SetAttribute("proto", $proto)
			if ($src_ip -ne $null) {
				$p.SetAttribute("src_ip", $src_ip.Trim())
			}
			if ($src_mask -ne $null) {
				$p.SetAttribute("src_mask", $src_mask.Trim())
			}
			$p.SetAttribute("op", $oper)
			$p.SetAttribute("port", $port)
			if ($h -ne $null) {
				$null = $h.AppendChild($p)
			} else {
				$null = $h2.AppendChild($p)
			}
		}
		
	} elseif ($cl -match "interface (.*)") {
		# located the beginning of an interface definition
		$curr_itf = $xcfg.CreateElement("interface")
		$curr_itf.SetAttribute("name", $matches[1])
		$null = $xcfg.root.my_config["system"].AppendChild($curr_itf)
		
	} elseif ($cl -match "ip name-server\s+(.*)") {
		# located name server entry for the router
		$dns = $xcfg.CreateElement("name_server")
		$dns.InnerXml = "<ip>" + $matches[1] + "</ip>"
		$null = $xcfg.root.my_config["system"].AppendChild($dns)
		
	} elseif ($cl -match "\s+description\s+(.*)") {
		if ($curr_itf -ne $null) { 
			# found description for an interface
			$curr_itf.SetAttribute("desc", $matches[1])			
		}
		
	} elseif ($cl -match "\s+ip access-group\s+(.*)") {
		if ($curr_itf -ne $null) { 
			$curr_itf.SetAttribute("acl", $matches[1])					
		}
		
	} elseif ($cl -match "\s+ip address\s+(\S+)\s+(\S+)") {
		# IP address for an interface
		if ($curr_itf -ne $null) { 
			$ip = $xcfg.CreateElement("ip")
			$ip.InnerText = $matches[1]
			$ip.SetAttribute("netmask", $matches[2])
			$null = $curr_itf.AppendChild($ip)
		}
		
	} elseif ($cl -match "\s*hostname\s+(.*)") {
		# extract configured host name for this router:
		$hostname = $matches[1]					
		$h = $xcfg.CreateElement("name")
		$h.InnerText = $hostname
		$null = $xcfg.root.my_config["system"].AppendChild($h)
		
	} elseif ($cl -match "ip route") {
		# currently we're doing nothing with routing information,
		# just reset the current interface def:
		$curr_itf = $null
	}
}

# save the complete generated xml to a file:
$xcfg.Save(".\cisco-cfg.xml")

Read Site and Subnet Information from Active Directory, then save to an XML File

August 23, 2011

In my series about collecting information about a local network, this post and script will be just a short one. If you have a Windows domain (or more) distributed over several locations, Active Directory stores information about site names and about which network mask is associated with which site. Having this information ready for further scripts might come in handy later on, e.g. when looking at router information or when deciding which subnets to scan while looking for alive local systems. Since I’m in favor of storing all kinds of information in XML files, here’s how to get all the information about sites and associated IP subnets from your Active Directory.

# get-ADSites.ps1
# (c) Marcus Schommler, 2011
# Read AD site names and subnets per site from the current AD forest and save this information to an XML file
# with thanks to: http://marcusoh.blogspot.com/2009/09/list-active-directory-subnets-with.html

# generate a rudimentary XML document from a string:
$xdoc = [xml] "<sites></sites>"
# get the node under which we will insert the site nodes:
$xsites = $xdoc.SelectSingleNode("sites")

# file name for saving site information:
$sites_fname = "sites.xml"

# iterate over the sites in the forest
$myForest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
foreach ($adsite in $myForest.Sites) {
	# new node for a site:
	$xsite = $xdoc.CreateElement("site")
	$null = $xsites.appendChild($xsite)
	# Instead of several calls to CreateElement and AppendChild,
	# we construct the content for the site node from strings:
	$snets = ""
	# iterate over the subnets for the site:
	foreach ($subn in $adsite.Subnets) {
		$snets += "<subnet>$subn</subnet>"
	}
	# now we simply assign a concatenation of several strings to the InnerXml property:
	$xsite.InnerXml = "<name>" + $adsite.Name + "</name>" + $snets
}

#save the site and subnet info to an XML file:
$xdoc.save($sites_fname)
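For a forest with a couple of sites, the generated sites.xml will look something like this (site name and subnets invented):

```xml
<sites>
  <site>
    <name>Default-First-Site-Name</name>
    <subnet>192.168.1.0/24</subnet>
    <subnet>192.168.2.0/24</subnet>
  </site>
</sites>
```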



Tackling the ‘whole pasta buffet’ mess of a network configuration – preamble to a series

August 18, 2011

From spaghetti code to pasta buffet

If you have a programming background you might be familiar with the term ‘spaghetti code’ – depicting a program whose internal structure is so messed up that its source code reminds you of a plate of spaghetti. When managing a local computer network, configuration changes over time might lead to deterioration of a once clean structure in a way that the result resembles not only ‘spaghetti code’ but a whole ‘pasta buffet’ instead – after a bunch of hungry guests paid it a visit.

And that is definitely the situation I’m currently facing at work. How it came to this, you can read about in a minute. Since just describing a despicable situation is somewhat dull and helps nobody, I’ll make this review of my current situation the starting point of a series of blog posts about how I try to tackle it with the help of Powershell and probably some other tools as well. If you are the organized type, working for a large institution with a good-sized IT budget, you might have implemented some ITIL-conforming processes for your network and have a well-stuffed CMDB in place as well. In that case, you can stop reading here. If not, at the end of this series you might end up with a set of information that would feel very much at home in a CMDB – or can be seen as a low-cost substitute for one.

Abbreviated long-term history of an institutional local network

To give you an idea how this ‘pasta buffet’ mess in a local network can come into existence, consider this history of a LAN system for an organization over the last fifteen years:

  • You start with a network of about 60 workstations and a few servers – where none of the latter has to offer services to the outside world.
  • Since you’re lucky, your internet provider assigns you a full class C net of official IP addresses. So you just assign IP addresses from this pool directly to all your machines.
  • The world-wide web comes to your organization. Now you have a web server hosting the institutional website. Of course this has to be available from the outside.
  • After some strategic decisions your organization starts building an R&D department excelling in Web development. Suddenly you have to manage more front end servers, application servers, and database servers than you’ve ever imagined necessary.
  • Somebody tells you that exposing all internal systems to the internet via the use of official IP addresses is a bad idea. So you start using private IP addresses internally. For the systems which are accessible from the outside you decide to use static NAT entries on your gateway router. Since this doesn’t work well in all situations, some systems keep their directly assigned official IP address.
  • You suddenly realize that giving your servers official IPs by static NAT leads to problems when internal clients try to get access to them. Your remedy for this is a ‘split brain’ DNS configuration where host name resolution for internal clients gives them the private IP of a server system.
  • Merger time! Your organization does a merge with two others which were just partners before. Suddenly you have two more office locations and VPN tunnels and routing between all of them.
  • As a part of the merger, the local IT teams are merged too, now working together on networking problems and remedies for them. One of the first outcomes is an organization-wide plan for the use of new private IP address ranges. So you start to assign addresses from this new range to new computer systems.
  • The unification of formerly distinct local IT structures continues. You get a new Active Directory Domain and start to migrate your servers and client PCs to this new domain. Of course this includes configuring and using new DNS and DHCP servers as well.
  • Wait a minute: Some servers can’t be migrated just like that. To be able to continue using existing installations of Sharepoint and Exchange you keep the old domain working for quite some time.
  • Bad guys all over the web. Your humble gateway router with its old-fashioned access list based restrictions is no longer secure enough. You introduce a firewall appliance between your LAN and the internet. Your old gateway router is still in use for routing and some VPN tunneling. Your plan is to replace its functionality piece by piece with equivalent features offered by your shiny new appliance.
  • You learn that the best way to manage your servers visible to the outside world is to put them into a separate network segment called a ‘Demilitarized Zone’ (DMZ). For that you need a consecutive range of official IP addresses. Luckily your internet provider still has some left (this is about the time when the IPv4 pool finally drains) and assigns you 64 addresses. You start moving servers out of the inner LAN to the DMZ.
  • And finally: It’s moving time! Your founders have evaluated your institution and recommended that two of your locations in neighbouring cities should be merged into one. As usual, the hope is for synergistic effects to happen afterwards.
  • Rejoice: You’ll get a whole new, state of the art data center! The backside of this is that it doesn’t free you from moving your current server systems into this new data center. And of course, the systems from your second, soon to be former location have to be moved too. Now it’s really time to start planning how to tackle this mess of a grown network configuration…

How to get a grip on this situation

Well, well, well. We really should have cleaned up everything completely right after making configuration changes to our network. But that reflective thought is not helping at all, so what to do right now? Of course anyone in a similar situation should do quite some network and system configuration cleanup as soon as possible. But even if everything is neat and tidy, moving a data center still means that the configuration of most servers and other network components will definitely change. And of course the following general thoughts don’t apply only to the quite specific situation of a data center move and merge, so you might profit from reading them as well – even if you only feel a bit ‘unwell’ about the current state of a local network you have to administer.

Get documentation

At the heart of planning for network configuration changes lies the need for current, up to date information. So no matter what you plan to do, make sure that you have all necessary information available in an easy to use format. Which information you really need might depend on your plans, but for many purposes you’ll basically need at least a common set of information about your systems and network. So our first goal is to get up to date configuration information. And since we’re about to change configurations iteratively, it doesn’t make much sense to collect this information without applying some automation to the gathering process. That brings us to the question of which information sources are readily available for automated extraction. The list of possible sources partly depends on the size and type of your network. You might not have a firewall appliance, or even a single machine running Windows. But the following list includes a few items specific to a Microsoft-centric shop:

  • DNS, both forward and reverse lookup zones
  • Configuration files from switches, gateway routers, firewall appliances
  • Active Directory: Information in there ranges from locations and IP subnets to computer accounts
  • Polling devices directly using WMI or SNMP
  • Network monitoring systems (BigBrother, Nagios, WhatsUp, …)

Getting and combining information from these sources can be a demanding task. Some of the mentioned items describe only a broad category (e.g. “gateway routers”), so the abstract goal of “getting configuration information from a gateway router” might result in many slightly different implementations, always depending on your specific router brand and model. Other items allow much more standardized querying. It doesn’t make a difference at all whether your DNS server is Microsoft’s own implementation, a BIND server running on Windows or Linux, or something else entirely: they all implement the same protocol to query against. The same holds true when using SNMP for querying devices. To a lesser extent, using WMI is also an abstraction over different kinds of machines, but here you’re limited to the Windows world.
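To give you a taste of the more standardized end of this spectrum, here is a minimal WMI sketch that asks a Windows machine for its NIC configuration (the computer name is a placeholder; the account running it needs WMI access on the target):

```powershell
# Sketch: query the IP configuration of a Windows host via WMI.
# 'SERVER01' is a placeholder for a real host name in your network.
$nics = Get-WmiObject -Class Win32_NetworkAdapterConfiguration `
        -ComputerName "SERVER01" -Filter "IPEnabled = true"
foreach ($nic in $nics) {
	$nic.Description
	"  IP(s): " + ($nic.IPAddress -join ", ")
	"  DNS:   " + ($nic.DNSServerSearchOrder -join ", ")
}
```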

In which format should we document our network?

The important thing for being able to combine information from many different sources is to build an abstraction of what you might call your conceptual or application domain. For example, firewall configurations are always about systems, interfaces, IP addresses, and allowed or denied traffic. This list does not show what your firewall actually does for you, but nonetheless it is working on these items. For our limited purpose of getting general network configuration information, the differences between models lie in the way each one requires you to write down the rules and to which systems they apply. Generally speaking, the network devices from which we want to gather information all share the same conceptual domain of computers, devices, NICs, IP addresses, network masks, DNS resolution, etc. Unfortunately it’s our task to translate all their funny little dialects into a common and manageable form. So how do we choose a suitable format to translate into? Do we have to start from scratch, adjusting and expanding the format as we go? Or is there a kind of ‘standardized system and network configuration documentation format’ available to build on? Actually, there is work going on in this area. But it is foundational work in progress, and getting into it when you ‘just want to describe a few systems in a network using XML’ can result in quite some overhead. To give you a short overview:

  • The IETF has a workgroup on a standard protocol for configuring network devices called NETCONF.
  • Since configuring network devices involves passing configuration data around, there are several (!) proposals for NETCONF data modeling languages. You’ll find that the IETF workgroup for that runs under the name of NETMOD and that one of the proposed modeling languages is called YANG. Another one goes by the name of KALUA, but YANG looks more ‘mainstream’ at the moment – if you can apply this term to a proposal.
  • If you want to use XML, you’re then told that there is always an exact mapping from YANG to an XML encoding. This is called YIN and can be seen as a subset of NETCONF XML.
  • The technical documents about YANG currently only contain elementary building blocks for data modeling applied to network devices. RFC 6021 tells you about “Common YANG data types” and contains definitions for concepts like counters, object-identifiers, timestamps, physical and MAC addresses, IP-addresses, domain names, hosts, and the like. Definitely all very important but there is still quite a gap from that to describing all currently relevant aspects of a computer system or a router.
  • For describing items like routers or computers you have to write a YANG Data Model. The NETMOD workgroup web page currently lists several papers in draft status about Data Models for the areas of (general) System Management and the configuration of IP, SNMP, and Routing. The ‘oldest’ of these documents is dated March 2011 so this is really work in progress. Still you might get some ideas from these drafts how best to describe network devices. For example, the System Management draft gives a basic definition of the entity ‘system’.

At the moment it looks like we’re quite a bit too early to take full advantage of established standards for describing network devices. Just to give you one example: you start to collect all this important information about your network devices using these shiny new standards. Wouldn’t it be nice to later reuse this information by importing it into a CMDB? But at the time of this writing I wasn’t able to find any CMDB system able to import device configurations given in YANG. On the other hand, NETCONF already has the backing of manufacturers like Cisco and Juniper, so this doesn’t look at all like a dead end. So I opt for a pragmatic use of soon-to-be standards: have a look at them whenever making decisions about how to encode configuration data, then make your own XML compatible with the drafts. But while doing so, keep your learning and encoding overhead low.

Define your goals, decide what to do

Depending on your individual situation, the changes to plan for your network and systems configuration vary a lot. This planning is always a demanding intellectual process for which any current configuration information can only be a basis. So we don’t attempt to automate the planning process itself; we just attempt to assist it as well as possible. But developing these assisting tools must still leave us enough time to do the planning itself. To put it the other way round: it’s really good to have neat and complete documentation about your network. But finishing the tools for getting this documentation ready one day before moving the data center is definitely ‘too late to satisfy’.

Don’t stop with generating reports, generate ‘action templates’ as well

If you have a strong vision about the ‘goal state’ for your network and you have all this nice documentation about its current state, do you really want to write down and execute by hand all the changes necessary for this transformation? Why not generate ‘action templates’ from the information about the current state? But be careful: Don’t execute the changes directly as you generate them. Write them to a file and review them intellectually one by one. You might even want to run some automated tests after every little or larger reconfiguration.

Check outcomes and side effects

Applying changes is always an error-prone process. So you should consider how to check whether the changes you applied lead to undesired results (interruption of services). Because of that, automated tests might be as helpful after network configuration changes as they are in software development. For some tests you can employ network and device monitoring software, if available at your site. But some tests might take quite a long time or only make sense to run once or twice after actual configuration changes. Because of that they might not fit well into a monitoring system whose primary purpose is to monitor constantly. So one task to keep in mind is to write test scripts which will show you whether configuration changes applied to network components worked out OK.
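Such a test script can start out really small. A sketch that reuses the XML file extracted from the router config in an earlier post (file name and node structure assumed from there):

```powershell
# Sketch: after a configuration change, ping every system listed in the
# XML extracted from the router config and report the ones that went silent.
$xcfg = [xml] (Get-Content .\cisco-cfg.xml)
foreach ($ip in $xcfg.SelectNodes("/root/systems/system/interface/ip")) {
	if (-not (Test-Connection -ComputerName $ip.InnerText -Count 2 -Quiet)) {
		"No answer from " + $ip.InnerText
	}
}
```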

Revisiting DNS testing: Checking SOA serial matching across servers

August 18, 2011

What to find in this updated script

We will add ‘SOA serial value checking’ to our DNS testing Powershell script. This new version of the script shows you how to do advanced DNS queries with Powershell. It’s also an example of how to reuse .NET assemblies available in EXE format.

The story behind this update

Just last week I spent the better part of a day trying to find out why one of the domains hosted by us had name resolution problems. To make things more complicated, these problems depended on the actual internet provider (read: name server) of the clients. After a long debugging session and kind help from two other institutions which are hosting secondary zones for us on their name servers, I was finally able to iron out the problems with our DNS configuration. And as is quite often the case, I have to blame myself for causing these problems. But the root cause of these problems is not the issue of this post. Instead it prompted me to augment my DNS testing Powershell script. I’ve now added code to check whether the ‘SOA serial’ is the same for all name servers hosting a zone. This test simply shows whether all name servers for your zone are ‘up to date’. If one or more of the servers hosting a secondary copy of your zone are lagging behind, you probably have a zone transfer problem that you might want to fix as soon as possible.

A SOA serial consistency test is also included in online services like the DNSreport in the DNSstuff Professional Toolset. The DNSreport there goes far beyond my little script and might easily be worth its yearly price. But my goal here is automation, and repeatedly checking name resolution for 20+ domains via a web interface is not my idea of a good day at the office. One idea would be to use Powershell to automatically fill in the DNSreport web form, send the request and parse the HTML results. But relying on the availability and unchanging layout of a website is one dependency too many for my taste. And since we’re talking about checking DNS here, you might even end up in a situation where you have to run the tests exactly because external web pages like the one for DNSstuff are not accessible due to current DNS configuration problems.

So what is involved in checking SOA serials for a zone? First of all we have to look up the name server (NS) entries for this zone. Then we have to ask all of these servers to tell us their SOA serial value for this zone. Finally, we can compare the results to see if there are any differences between them.
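The last step is the easy part. Assuming the serials have already been collected into a hashtable mapping server names to serial values (names and values here are invented), the comparison boils down to:

```powershell
# Sketch: compare SOA serials collected per name server (values invented).
$serials = @{
	"ns1.example.org" = "2011081801"
	"ns2.example.org" = "2011081801"
	"ns3.example.org" = "2011081700"   # this one is lagging behind
}
$distinct = @($serials.Values | Sort-Object -Unique)
if ($distinct.Count -eq 1) {
	"OK: all name servers agree on serial " + $distinct[0]
} else {
	"MISMATCH - zone transfer problem likely:"
	$serials.GetEnumerator() | Sort-Object Value | ForEach-Object {
		"  " + $_.Key + " -> " + $_.Value
	}
}
```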

Implementation

The next questions for the implementation are: using Powershell, how do we get the name server information for a zone, and how do we get the SOA serial values from the different name servers listed for the zone? On the command line all this information can be obtained by employing nslookup. Entering “set type=ns” at the nslookup command prompt and then querying a zone name gives you a list of the name servers registered for the zone. After that, “server ” followed by one of the names tells nslookup to query this name server directly. Issuing “set type=soa” and then querying the zone name gives you the full SOA record. This record will tell you which name server is the primary one, an administrative contact, and the serial value for the zone. The serial value is usually (outside the MS AD world) constructed from a date in ISO format plus two more digits depicting the ‘version of the day’.

If you try these commands with nslookup you notice that the 'answer format' differs very much depending on the question being asked. We always get a bunch of text lines which 'somewhere' contain the information we're interested in. When using nslookup from Powershell we would have to extract this information either by purely positional extraction ('result line X, character position A to B') or by regex pattern matching ('get the lines containing the pattern "nameserver = " and extract everything after the equal sign'). While this is definitely a possible way to go, personally I don't feel comfortable whenever I have to rely on the assumption that the output format of external applications like nslookup doesn't change. For that reason I was looking into alternative implementations. Since DNS resolution is a very important service in every computer system, my first guess was that there must be something in the .NET Framework for that purpose. Unfortunately, System.Net.Dns gives you only basic DNS resolution capabilities like forward and reverse lookup of host names and addresses. It won't even allow you to specify the DNS server to use, so you're stuck with the one configured for your active NIC. Fortunately a quick Google search revealed that somebody had already had a go at these shortcomings of the .NET DNS class. This CodeProject page by Alphons van der Heijden is about a GUI-based .NET utility offering the functionality of the dig implementation found e.g. in the ISC BIND package. For our script we won't use the GUI part included in this utility. But we can still use it from Powershell, since .NET assemblies allow you to directly reuse the public classes contained in them. And that holds true even if the assembly is an EXE file and not a DLL – which is the traditional assembly format for reuse of compiled code.
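Before building the full script, the resolver class can be tried interactively from Powershell. Here is a minimal sketch (dnsdig.exe is assumed to sit in the current directory, 8.8.8.8 and example.org are just placeholders, and the QType value 2 is the RFC 1035 type number for NS records, assuming the assembly's QType enum follows the RFC numbering):

```
# load the assembly containing the resolver class:
$null = [System.Reflection.Assembly]::LoadFrom("$pwd\dnsdig.exe")
# the constructor parameter is the DNS server to send queries to:
$my_resolv = New-Object Heijden.DNS.Resolver("8.8.8.8")
# query the NS records for a zone (2 = NS in RFC 1035):
$response = [Heijden.DNS.Response] $my_resolv.Query("example.org", [Heijden.DNS.QType] 2)
foreach ($rr in $response.RecordsRR) { Write-Host $rr.type $rr.record }
```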

Another way to reuse the code by Alphons van der Heijden with Powershell can be found here: Joel Bennett wrapped it up so it can be used straightforwardly as a Powershell cmdlet.

After all these explanations, here finally is the updated Powershell script:

# Test-IPResolution.ps1
# (c) Marcus Schommler, 2011
# Testing DNS resolution for multiple DNS servers and multiple host names.
# The configuration and test data is read from an XML file.
# Version 2:
# - added SOA serial value lookup option over all nameservers listed for a domain/zone
# - using a .NET based assembly implementing DNS resolving capabilities

# ==========================================
# Open issues & possible enhancements:
# - It might be of interest whether the answer of a DNS server is non authoritative.
# - Also it might be helpful to specify for test cases whether an authoritative answer is expected.
# - XML could be used for output as well...

# assertEquals()
# inspired by: http://www.leeholmes.com/blog/2005/09/05/unit-testing-in-powershell-%E2%80%93-a-link-parser/
function assertEquals (
	$expected = "Please specify the expected object",
	$actual = "Please specify the actual object"
    )
{
	if(-not ($expected -eq $actual)) {
		$res = "FAILED. Expected: $expected.  Actual: $actual."
	} else{
		$res = "OK. $actual";
	}
	$res
}

# forward_lookup(): Do DNS forward lookups by using nslookup
# inspired by: http://powershellmasters.blogspot.com/2009/04/nslookup-and-powershell.html
Function forward_lookup ($hostname, $dns_server) {
	# Build command line including stderr redirection to null device:
	$cmd = "nslookup " + $hostname + " " + $dns_server + " 2>null"
	$Error.Clear()
	$global:nonauthAnswer = $false
	$global:controlladns = $false
	$result = Invoke-Expression ($cmd)
	trap {
		$global:controlladns = $true
		$solved_ip = "0.0.0.0"
		continue
	}
	if ($Error.Count -gt 0) {
		# nslookup does output to stderr if a name resolution result is not authoritative.
		# Simplified assumption here: This is the only reason for generating error output.
		# echo "answer not authoritative"
		$global:nonauthAnswer = $true
	}

	# Line 4 of the nslookup output contains the resolved IP address
	# -> check and extract by (simplified) pattern matching:
	if ($result.SyncRoot[4] -match "([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)") {
		$solved_ip = $matches[1]
	}
	$solved_ip
}

# DNS query type values (RFC 1035 numbering)
# nameserver queries (NS = type 2):
[Heijden.DNS.QType] $qtype_NS = 2
# SOA queries:
[Heijden.DNS.QType] $qtype_SOA = 6

# helper function for checking SOA serial values across nameservers:
function compare_soa_serials([ref]$last_serial, $this_serial, $zone) {
	if ($last_serial.Value -eq $null) {
		$last_serial.Value = $this_serial
	} else {
		if ($last_serial.Value -ne $this_serial) {
			# the serials don't match -> note a failure in the script-scoped array
			$script:fails += $zone + ": SOA serial mismatch between name servers"
		}
	}
}

# function for getting the SOA serial values from all nameservers for a domain (zone)
function get_soa_serials($start_dns, $zone) {
	# get a new resolver, the parameter to the constructor specifies the DNS server to start with:
	$my_resolv = New-Object Heijden.DNS.Resolver($start_dns)
	$curr_serial = $null
	$nservs = [Heijden.DNS.Response] $my_resolv.Query($zone, $qtype_NS)
	foreach ($rr in $nservs.RecordsRR) {
		if ($rr.type -eq "NS") {
			# found a name server record
			# set the DNS server to use and query the SOA record:
			$nsdname = $rr.record.nsdname
			$my_resolv.DnsServer = $nsdname
			$res = [Heijden.DNS.Response] $my_resolv.Query($zone, $qtype_SOA)
			if ($res.header.rcode -eq "REFUSED") {
				Write-Host $zone $nsdname": query refused"
			} else {
				# there should always be only one SOA result record...
				foreach ($soa in $res.RecordsSOA) {
					Write-Host $zone $nsdname ": "  $soa.serial
					compare_soa_serials ([ref]$curr_serial) $soa.serial $zone
				}
			}
		} elseif ($rr.type -eq "SOA") {
			# we directly got a SOA record back
			Write-Host $zone $rr.record.mname ": " $rr.record.serial
			compare_soa_serials ([ref]$curr_serial) $rr.record.serial $zone
		}
	}
}

## main code section ##

# get the path of the currently running script (we need this for LoadFrom()):
$script_path = Split-Path -parent $MyInvocation.MyCommand.Definition

# We're using the DNS .NET resolver utility written by Alphons van der Heijden
# found on this CodeProject page: http://www.codeproject.com/KB/IP/DNS_NET_Resolver.aspx
# To make this script work, you just have to download http://www.codeproject.com/KB/IP/DNS_NET_Resolver/Article_Demo.zip
# and put the dnsdig.exe from the archive into the directory of this script.

# Calling Powershell V2.0 Add-Type to load an assembly won't work in our case
# since it's lacking support for EXE assemblies. So we use reflection
# to load the EXE assembly containing the resolver class to be used:
$null = [System.Reflection.Assembly]::LoadFrom("$script_path\dnsdig.exe")

# read all settings and test cases from an XML file:
$test_settings = [xml]( Get-Content .\dns-tests-2.xml )

# get start node for the dns server groups to use:
$server_groups = $test_settings.SelectNodes("ruth/dns_server_group")
# get start node for the test cases to check:
$test_sets = $test_settings.SelectNodes("ruth/dns_test_set")

# init array for storing information about failed tests:
$fails = @()

# iterate over test sets found:
foreach ($set in $test_sets) {
	# look up which group of DNS servers should be used for this test set:
	$sg_name = $set.getAttribute("dns_server_group")

	$check_soa_serials = $set.getAttribute("check_soa_serials")
	$check_soa_serials = ($check_soa_serials -eq "1")

	# get the individual dns server nodes for this dns server group:
	$sg = $test_settings.SelectNodes("ruth/dns_server_group[@id='$sg_name']/dns_server")

	Write-Host "Test-Set: " $set.GetAttribute("id")

	# get the actual tests to perform
	$tests = $set.SelectNodes("dns_test")
	# iterate over the servers in the server group:
	$first_dns_server = $null
	foreach ($server in $sg) {
		# iterate over the test cases:
		if ($first_dns_server -eq $null) {
			#save name of first dns server from group for later use
			$first_dns_server = $server.getAttribute("ip")
		}
		foreach ($test in $tests) {
			# extract the needed information for test execution from various xml nodes:
			$hostname = $test.getAttribute("host")
			$expect_ip = $test.getAttribute("ip")
			$dns = $server.getAttribute("ip")
			# execute the test:
			$ip = forward_lookup $hostname $dns

			# check the result of the dns resolution against the expected IP address:
			$res = assertEquals $expect_ip $ip
			# generate output:
			$s1 = "DNS: $dns, Host: $hostname : "
			if ($res.contains("FAIL")) {
				$fails += $s1 + $res
			}
			Write-Host $($s1 + $res)
		}
	}
	if ($check_soa_serials) {
		# if SOA serial value testing is to be done, we prepare a list of domains (zones)
		# by iterating once again over the test set.
		$domains = @{}
		foreach ($test in $tests) {
			$hostname = $test.getAttribute("host")
			# extract the domain part from the fully qualified host name:
			$dom = $hostname.substring($hostname.IndexOf(".") + 1)
			$domains[$dom] = 1
		}
		# now iterate over the distinct domains (zones) found:
		foreach ($zone in $domains.keys) {
			# do the SOA serial value check starting with the first dns server from the test group:
			get_soa_serials $first_dns_server $zone
		}
	}
	Write-Host "===== End Test-Set: " $set.GetAttribute("id")
}

# Output of a Summary: To have only the failed tests all in one coherent block of text,
# we repeat the output just for them:
Write-Host "============"
Write-Host "All Failures"
foreach ($f in $fails) {
	Write-Host $f
}

Categories: powershell

After a long time my mind finally made this connection…

August 7, 2011

After using Powershell for a few months now, only just today I stumbled upon this analogy: In New Zealand you’ll find molluscs called Paua. Sitting right here in my bathroom are the remains of one of these – a Paua shell.
Besides the homophony, there is another striking similarity between a Paua shell and Powershell: In their natural state they're both quite ugly. That's what a Paua looks like when taken from the ocean:

And as with Powershell, only if you invest some work and do some polishing will you get results with a shine:
Paua Shell

But the Paua still has a big advantage over Powershell: It doesn't take you a long time to get a decent meal out of the former! :-)

Categories: powershell

Adding some beef: Multi server multi host DNS resolution testing

August 6, 2011

When you’re hosting multiple web presences and you’re also being responsible for the DNS name resolution zones, you might find yourself in a situation where touching anything DNS-related is giving you a bad feeling even before you start. It is getting even worse if for some reason you also have to deal with a ‘split brain’ DNS setup (serving different IP addresses for the same host name depending on who is asking, either an external or internal client).

To help with that situation I started to build a script that allows me to run automated DNS resolution tests against groups of DNS servers. All configuration and test data is kept external to the script in an XML file. It's astonishing how easy it is to work with XML in Powershell. Other introductory examples mostly show you how to select nodes and how to manipulate XML to generate XML output again. This script directly iterates over XML node subsets read from a file to generate the DNS test command lines, to look up the DNS servers to execute the tests against, and to check whether the results match the expected IP addresses specified beforehand.
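As a small standalone illustration of how little ceremony is needed (the element names here are invented just for this sketch):

```
# cast a string - or the output of Get-Content - to an XML document:
[xml]$doc = '<config><server ip="192.0.2.1" /><server ip="192.0.2.2" /></config>'
# dot notation walks the element tree directly:
$doc.config.server | ForEach-Object { Write-Host $_.ip }
# and XPath via SelectNodes works as well:
$doc.SelectNodes("config/server[@ip='192.0.2.2']").Count
```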

The actual DNS lookup is done by wrapping a call to nslookup. This is because the alternative .NET class System.Net.Dns used in my last post doesn't allow you to specify the DNS server to be used; it will always use the one from the local NIC configuration.

# Test-IPResolution.ps1
# (c) Marcus Schommler, 2011
# Testing DNS resolution for multiple DNS servers and multiple host names.
# The configuration and test data is read from an XML file.

# ==========================================
# Open issues & possible enhancements:
# - It might be of interest whether the answer of a DNS server is non authoritative.
# - Also it might be helpful to specify for test cases whether an authoritative answer is expected.
# - XML could be used for output as well...

# assertEquals()
# inspired by: http://www.leeholmes.com/blog/2005/09/05/unit-testing-in-powershell-%E2%80%93-a-link-parser/
function assertEquals (
    $expected = "Please specify the expected object",
    $actual = "Please specify the actual object"
    )
{
    if(-not ($expected -eq $actual)) {
        $res = "FAILED. Expected: $expected.  Actual: $actual."
    } else{
        $res = "OK. $actual";
    }
    $res
}

# forward_lookup(): Do DNS forward lookups by using nslookup
# inspired by: http://powershellmasters.blogspot.com/2009/04/nslookup-and-powershell.html
Function forward_lookup ($hostname, $dns_server) {
    # Build command line including stderr redirection to null device:
    $cmd = "nslookup " + $hostname + " " + $dns_server + " 2>null"
    $Error.Clear()
    $global:nonauthAnswer = $false
    $global:controlladns = $false
    $result = Invoke-Expression ($cmd)
    trap {
        $global:controlladns = $true
        $solved_ip = "0.0.0.0"
        continue
    }
    if ($Error.Count -gt 0) {
        # nslookup does output to stderr if a name resolution result is not authoritative.
        # Simplified assumption here: This is the only reason for generating error output.
        # echo "answer not authoritative"
        $global:nonauthAnswer = $true
    }

    # Line 4 of the nslookup output contains the resolved IP address
    # -> check and extract by (simplified) pattern matching:
    if ($result.SyncRoot[4] -match "([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)") {
        $solved_ip = $matches[1]
    }
    $solved_ip
}

## main code section ##

# read all settings and test cases from an XML file:
$test_settings = [xml]( Get-Content .\dns-tests-2.xml )

# get start node for the dns server groups to use:
$server_groups = $test_settings.SelectNodes("ruth/dns_server_group")
# get start node for the test cases to check:
$test_sets = $test_settings.SelectNodes("ruth/dns_test_set")

# init array for storing information about failed tests:
$fails = @()

# iterate over test sets found:
foreach ($set in $test_sets) {
    # look up which group of DNS servers should be used for this test set:
    $sg_name = $set.getAttribute("dns_server_group")

    # get the individual dns server nodes for this dns server group:
    $sg = $test_settings.SelectNodes("ruth/dns_server_group[@id='$sg_name']/dns_server")

    Write-Host "Test-Set: " $set.GetAttribute("id")
    # get the actual tests to perform
    $tests = $set.SelectNodes("dns_test")
    # iterate over the servers in the server group:
    foreach ($server in $sg) {
        # iterate over the test cases:
        foreach ($test in $tests) {
            # extract the needed information for test execution from various xml nodes:
            $hostname = $test.getAttribute("host")
            $expect_ip = $test.getAttribute("ip")
            $dns = $server.getAttribute("ip")
            # execute the test:
            $ip = forward_lookup $hostname $dns

            # check the result of the dns resolution against the expected IP address:
            $res = assertEquals $expect_ip $ip
            # generate output:
            $s1 = "DNS: $dns, Host: $hostname : "
            if ($res.contains("FAIL")) {
                $fails += $s1 + $res
            }
            Write-Host $($s1 + $res)
        }
    }
    Write-Host "===== End Test-Set: " $set.GetAttribute("id")
}

# Output of a Summary: To have only the failed tests all in one coherent block of text,
# we repeat the output just for them:
Write-Host "============"
Write-Host "All Failures"
foreach ($f in $fails) {
    Write-Host $f
}

And that’s how an XML file to be used with this script could look like:

<?xml version="1.0" standalone="yes"?>
<ruth>
	<dns_server_group id="Gesis external lookup">
		<dns_server note="dns2.gesis.org" ip="194.95.75.2" />
		<dns_server note="unix1" ip="134.95.45.3" />
	</dns_server_group>
	<dns_server_group id="Google public DNS servers">
		<dns_server note="goopub1" ip="8.8.8.8"/>
		<dns_server note="goopub2" ip="8.8.4.4"/>
	</dns_server_group>
	<dns_test_set id="gesis.org external" dns_server_group="Gesis external lookup">
		<dns_test host="www.gesis.org" ip="172.16.4.252" />
		<dns_test host="ftp.bonn.gesis.org" ip="193.175.238.3" />
		<dns_test host="jws.bonn.gesis.org" ip="194.95.75.5" />
		<dns_test host="listserv.bonn.gesis.org" ip="193.175.238.78" />
		<dns_test host="webmail.koeln.gesis.org" ip="134.95.45.6" />
		<dns_test host="download.za.gesis.org" ip="134.95.45.13" />
	</dns_test_set>
	<dns_test_set id="PartnerHosting external" dns_server_group="Gesis external lookup">
		<dns_test host="www.asi-ev.org" ip="193.175.238.210"/>
		<dns_test host="www.dgo-online.org" ip="193.175.238.92"/>
		<dns_test host="www.iconnecteu.org" ip="193.175.238.140"/>
		<dns_test host="www.soziologie.de" ip="194.95.75.36"/>
	</dns_test_set>
	<dns_test_set id="PartnerHosting external - google dns" dns_server_group="Google public DNS servers">
		<dns_test host="www.asi-ev.org" ip="193.175.238.210"/>
		<dns_test host="www.dgo-online.org" ip="193.175.238.92"/>
	</dns_test_set>
</ruth>
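To see how the script's XPath expressions map onto this file, you can experiment interactively in Powershell (a small sketch, assuming the XML above has been saved as dns-tests-2.xml in the current directory):

```
# load the test definitions, just like the script does:
$test_settings = [xml]( Get-Content .\dns-tests-2.xml )
# list the ids of the three test sets:
$test_settings.SelectNodes("ruth/dns_test_set") |
    ForEach-Object { $_.getAttribute("id") }
# resolve one named server group to its server IPs,
# mirroring the lookup done inside the script's main loop:
$sg_name = "Google public DNS servers"
$test_settings.SelectNodes("ruth/dns_server_group[@id='$sg_name']/dns_server") |
    ForEach-Object { $_.getAttribute("ip") }
```

The second pipeline should print 8.8.8.8 and 8.8.4.4, the two servers defined for that group.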

Categories: powershell