


BYOD and IoT – How To Manage And Secure More Devices On Your Wi-Fi


The Internet-of-Things (IoT) will see ever more devices connecting to each other via wireless technologies. This webinar is for IT professionals looking to learn about, implement or expand wireless networks in their organisation.

This on-demand webinar was originally presented on 12th March 2015 and covers:

  • The role of 802.11ac in networks today and tomorrow
  • Balancing flexibility and security through context-based network access
  • How context-based networking is the key to performance

[Video: on-demand webinar recording]


Gartner Report: Market Guide for DNS, DHCP, and IP Address Management (DDI)


Gartner has published its 2015 report on DDI – the product category for managing DNS, DHCP, and IP addresses to improve IT infrastructure availability and efficiency. DDI solutions are an increasingly important aspect of private cloud initiatives due to their ability to help automate both DNS and Internet Protocol (IP) address management functions.

“DDI helps improve the availability of critical IT infrastructure while reducing operational expenditures,” the report says. “Gartner estimates that usage of a commercial DDI solution can reduce opex related to DNS/DHCP and IP address management by 50% or more.”

View the full report here (courtesy of Infoblox) – no registration required.


Webinar: 802.11n is dead, and with it the WLAN controller


WLAN controllers form bottlenecks and single points of failure that are unacceptable in today’s “always-on” world. Aerohive provides wireless infrastructure without WLAN controllers and supports the latest 802.11ac standards.

This on-demand webinar was originally presented on 9th April 2015.

Investing in network infrastructure is not a trivial decision and requires appropriate consideration. Whether your organisation is thinking about making the move or is already actively planning for the next evolution of Wi-Fi, join this webinar to learn more about the benefits of Gigabit Wi-Fi.

The webinar is for IT professionals looking to learn about, implement or expand wireless networks in their organisation.

This webinar will cover:

  • How to plan an infrastructure to meet the demand of the mobile invasion
  • Why increasing capacity means more than just adding access points
  • What the evolution of Wi-Fi looks like for your organisation

[Video: on-demand webinar recording]


Gartner Names Silver Peak a Leader in Magic Quadrant for WAN Optimisation



Silver Peak Solves Critical Requirements of an SD-WAN

Gartner has named Silver Peak a Magic Quadrant Leader. Be among the first to see the latest report by completing the form below.

According to Gartner, as WAN optimisation appliances increasingly include WAN path control and local link load balancing capabilities, these products are morphing into SD-WAN solutions.

WAN requirements are evolving rapidly. Enterprise customers are becoming more frustrated with the high cost and complexity of MPLS networking, and until now, an inability to easily leverage lower-cost Internet in a secure, controlled and optimised manner.

We believe Silver Peak is best-positioned to help you meet these new WAN requirements. With Dynamic Path Control, Silver Peak offers the most comprehensive solution today for building an SD-WAN fabric that helps you migrate to an enterprise-grade WAN using hybrid or all-Internet WAN connectivity across a distributed environment.

[Image: Gartner Magic Quadrant for WAN Optimisation, 2015]

Please complete the form below to view your complimentary report, and learn why Gartner named Silver Peak a WAN Optimisation Magic Quadrant leader.

[Registration form]


Webinar: Wi-Fi Security – How to prevent your network being hacked by a kettle



Today someone brought a Wi-Fi enabled kettle to work; the next week, our CEO heard about Wi-Fi enabled lightbulbs that help increase building efficiency; now our IT team is swamped trying to support all manner of devices alongside the traditional challenges of BYOD and guest mobility requirements.

Wi-Fi is continually increasing in complexity, especially when it comes to security. Every day there are new users, devices, and applications demanding access, and it’s the Wi-Fi’s job to protect the network, often from both the inside and the outside.

Join this webinar to learn:

  • Why context-based network access is crucial for any Wi-Fi deployment
  • The challenges of BYOD and IoT
  • Why applications are being controlled at the edge of your network

This on-demand webinar was originally presented on 12th May 2015.

[Video: on-demand webinar recording]


Tolly report validates Infoblox Cloud Network Automation savings



Tolly Group, a leading independent IT testing firm, has found that automation of core network services—DNS, DHCP and IP addresses (DDI)—can reduce the deployment time for virtual machines in a VMware private cloud environment by 62 percent.

Private clouds are rapidly emerging as the infrastructure platform of choice for many organisations because of the speed and versatility they offer in launching new services. Many organisations are now moving mission-critical applications onto these private clouds. When provisioning new resources such as virtual machines (VMs), network setup is often a choke point, hobbled by manual processes that cause delays, require hand-offs across multiple IT teams, and introduce errors.

To clearly quantify this network automation gap, and shed light on the impact of lost time and productivity, Infoblox commissioned a study by Tolly Group, a third-party certification and testing firm. This test, conducted by Tolly engineers, measured the time required for discovering, tracking, provisioning, and destroying VMs with IP addresses and DNS records in a VMware private cloud when using manual processes in comparison to the new Infoblox Cloud Network Automation.

Here are some of the results as documented in the report:

  • The time to inventory, create, and destroy 500 VMs per week in Tolly’s test environment—designed to mimic a real-world large enterprise—dropped from 42 hours when using manual processes to just 16 hours with Infoblox. This is a reduction of 62 percent, and doesn’t include any time saved from not having to resolve outages due to errors from manually tracking IP addresses and DNS records.
  • Discovering network information for 20 existing VMs—such as virtual data centre location, virtual cluster location, and tenant IDs—took 17 minutes manually and just 45 seconds with Infoblox.
  • Finding available IP addresses when all the addresses in a typical subnet are taken required two minutes manually and only 30 seconds with Infoblox.

To download a free copy of the report, please complete the following form:

[Registration form]


Is DANE DNSSEC’s killer app?


DANE has been around for a few years now but still seems to be a bit of an underground topic. It hardly ever crops up in conversations I have with prospects, and the fact that it relies on DNSSEC, which takes a serious commitment to implement, makes me wonder if this is just another good idea that is going nowhere.

However, once you understand what DANE can do, you have to wonder why more people aren’t implementing it. In fact, DANE could be just the “killer app” that DNSSEC needs to make implementing DNSSEC a “no-brainer”.

So what is DANE and why could it be such a revolution?

DANE should not be confused with a not-very-famous nineties pop star; it actually stands for “DNS-Based Authentication of Named Entities” and was ratified in RFC 6698 back in 2012.

DANE, at its simplest, allows the Internet to “do away” with certificate authorities and instead store SSL/TLS certificates as DNS records. These records can then be signed with DNSSEC to verify that they are authentic and have not been tampered with. Because DNSSEC relies on a chain of trust, your organisation can be responsible for storing, signing and revoking certificates, and everyone else on the Internet can be confident that your certificates are valid: your DNSSEC-enabled domain has been trusted by your parent delegating authority, which in turn has been trusted by its parent, and so on all the way up to the DNS root.
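
In concrete terms, DANE publishes a “TLSA” record alongside the service name. Here is a minimal sketch of what one looks like – the hostname is a placeholder, the hash is truncated, and in a real deployment the record would be signed with DNSSEC like everything else in the zone:

; TLSA record for HTTPS (TCP port 443) on www.example.com
; usage 3 (domain-issued certificate), selector 1 (public key), matching type 1 (SHA-256)
_443._tcp.www.example.com. IN TLSA 3 1 1 2abf7d1e...d1f0a8c4

A validating client looks up this record, checks the DNSSEC signatures, and then verifies that the certificate presented by the server matches the published hash.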

This means there is no longer any need to maintain a list of trusted certificate authorities in your browser (or operating system), and this is why DANE is such a game changer.

Who do you trust?

Currently, each browser or piece of software that establishes connections over SSL has to be able to validate the certificate it is presented with. But how do you know if that certificate is valid? Normally the certificate has been signed by an intermediate certificate authority (CA) that is trusted by a root CA – and you will have a list of the root CAs stored in your browser (or operating system).

Currently, Chrome on my laptop (which uses the Windows certificate store) has 48 root CAs listed, but many more are available via Windows Update (Microsoft has a 32-page PDF file here that lists them all). Different operating systems have different mechanisms for maintaining the list of trusted root CAs, but with so many root CAs being trusted it is difficult to be 100% confident that they are all genuine, have not been compromised, or are not being used to snoop on SSL traffic (see the article “Hacked Certificate Authorities – Nothing Left to Trust”). You have no personal or business relationship with the providers of these root CAs, yet you are trusting them to guard your private information. Isn’t this a bit like giving everyone in your street a piece of paper with your Internet banking details on it and hoping they don’t get burgled?

You also have very little control over which root CA is used to ultimately verify an SSL certificate. What if you visit a site and the certificate claims to be trusted by a root CA in China or Saudi Arabia? Microsoft’s root CA list in the PDF file mentioned earlier does not include any root CAs in Iraq, Iran, Syria or North Korea, but both China and Saudi Arabia operate root CAs, and both are well-known proponents of Internet censorship. Would you be happy knowing that your SSL traffic could potentially be intercepted by the governments of these countries?

Because DANE removes the need to maintain and trust a list of root CAs (with whom you have no personal or business relationship), it removes a lot of these “trust” issues. Your SSL/TLS certificates are signed by your DNSSEC private key, which you can change at your leisure, and your domain is trusted by your parent (with whom you do have a business relationship, as they had to sign your “DS” records). Firstly, people who need to validate your SSL/TLS certificates only need to trust the DNS root domain; secondly, if someone manages to compromise your DNS domain, the DNS TLSA records (used to store the certificates) will not validate, causing browsers worldwide to issue certificate warnings and/or block access to the secure elements of your site.

Adoption, as usual, is the biggest challenge

Switching from a CA-based system to DANE is obviously a massive change, but there is a halfway house which might help: allowing DANE to “nominate” a set of specifically trusted CAs (using DNS TLSA records that have, again, been signed with DNSSEC). The CA mechanism can still be used by client software (i.e. retrieve the certificate from a CA rather than via DNS), but DANE is used to specify which CAs should be allowed to validate your certificates. Another option is to use DANE to store a fingerprint of your certificate, so clients can check that the certificate presented matches the fingerprint you have stored in DNS (i.e. a double-check that it has not been compromised).
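
These intermediate approaches map directly onto the “certificate usage” field of the TLSA record defined in RFC 6698. A sketch, again with placeholder names and truncated hashes:

; usage 0: only a certificate chaining to this CA is acceptable for the service
_443._tcp.www.example.com. IN TLSA 0 0 1 9a3c51f2...77be04d1
; usage 1: the certificate presented by the server must match this fingerprint
_443._tcp.www.example.com. IN TLSA 1 0 1 4f8e02aa...c93d17e5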

The main problem at the moment, however, is one of support. Neither Internet Explorer nor Chrome provides native support for DANE, although a plug-in is available here. So web browsing will probably have to continue using the current “broken” trusted root CA-based system for the time being, but other use cases are appearing, notably secure delivery of email over TLS, as supported in Postfix v2.11 and above (see an interesting article discussing sending secure email using Postfix and DANE here). And of course, DNSSEC also needs to be deployed, which has its own set of challenges.

Maybe if Chrome and Internet Explorer start including native support for DANE, it will start to gain momentum, which will in turn drive DNSSEC adoption. It would be great to see DANE gain widespread adoption as I believe it could go a long way to making the Internet a safer, more secure place.

Unfortunately, there is a huge business that revolves around issuing and validating X.509 certificates and DANE could jeopardise that revenue stream, which ultimately may prove to be the main reason why it doesn’t take off.



Webinar: Fast Wi-Fi – Measuring More Than Gbps



Experience first-hand the full feature set of an Aerohive Access Point. Attend this webinar on Thursday 9th July 2015 at 11:00 BST and Aerohive will send you an 802.11ac Access Point free of charge*.

Register now

802.11ac is propelling Wi-Fi beyond the Gigabit barrier, and ensuring that organisations have sufficient bandwidth to cope with the tidal wave of corporate, personal, guest, and IoT devices flooding onto the network.

In 2015, speed is important, but organisations are starting to measure performance of Wi-Fi solutions from a different point of view. Join this webinar to learn how to evaluate and measure the success of your mobile users, devices, applications and organisation.

Join this webinar to learn:

  • How to quickly onboard BYOD, guest and IoT devices
  • How to prepare a redundant infrastructure, and quickly recover from issues
  • How to utilise the cloud for rapid deployment, visibility and support

Webinar details:

Date: Thursday 9th July 2015
Time: 11:00 BST
Duration: 45 mins with Q&A

Click below to register for the webinar:

Register now

* Free (802.11ac) Access Point offer:
Qualified attendees will receive a free Access Point as per the qualification criteria available on the registration page.


Calleva Networks launches free DNS malware assessment


More and more malware is using DNS not only to contact command-and-control servers but as a data exfiltration mechanism that can see your valuable data “leaking” out from the confines of your organisation.

DNS is normally left relatively unsecured compared to other protocols because so many applications and servers depend upon it. Through the DNS forwarding mechanism, internal DNS servers can often resolve Internet DNS names either directly or via DNS servers located in a corporate DMZ.

However, malware can piggy-back onto these queries and use this mechanism to send sensitive corporate data out to the Internet as simple DNS queries. To a next-generation firewall looking for suspicious activity, they just look like normal DNS queries; a dedicated DNS firewall device, however, can detect and block this traffic.
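
To illustrate the idea: exfiltrated data is typically chopped into chunks, encoded, and embedded into the query name itself. A purely hypothetical example (exfil-domain.example is a placeholder for an attacker-controlled domain whose name servers receive and decode the queries):

MFRGGZDF4MZXW6YTB.c1.exfil-domain.example
ONSWG4TF4OR2GS3LF.c2.exfil-domain.example

Each query looks innocuous on its own; it is the volume and structure of queries to a single domain that a DNS firewall can detect.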

DNS firewalls are quite a new proposition, but convincing organisations to deploy them can be a challenge. That’s why we are offering a free DNS malware assessment to check if your organisation’s DNS servers are being targeted by malware.

Click here for instructions on how to take advantage of this free service.


Should a DNS Firewall be part of your defence-in-depth strategy?


There has been a slew of DNS Firewall-related market activity recently that makes me wonder if DNS Firewall products and solutions are finally gaining market acceptance.

OpenDNS is probably one of the best-known DNS Firewall vendors, operating a global network of recursive servers that anyone can use for free, with the option to filter and block queries that attempt to resolve undesirable domain names (such as adult or gambling sites, or sites hosting malware). They have been doing this successfully for a number of years, but Cisco recently announced its intention to acquire OpenDNS for the tidy sum of $635m in cash. That certainly raised eyebrows at Calleva Networks HQ!

Just a week later, Verisign announced the launch of an OpenDNS rival, imaginatively called “Verisign DNS Firewall”. This is another cloud-based solution, and quite clearly, if Verisign and Cisco are both trying to get in on the act, then there must be demand from the market for these kinds of technologies.

But what if you want to run an on-premise solution? The good news is that all the main DDI vendors have now launched DNS Firewall solutions, so you can sanitise your DNS queries before they get sent out to the cloud.

In order to encourage companies to evaluate a DNS Firewall, Calleva Networks recently launched a DNS malware assessment service. Our first assessment was for a large European petro-chemical group, and during 90 minutes of DNS packet capture we identified numerous DNS queries associated with Cryptolocker ransomware domains, Ponmocup botnet communications, and lookups for domains on various active blacklists published by AlienVault, ThreatSTOP, MalwareDomainList and others. The results were truly terrifying! How did this stuff get onto the network in the first place? Traditional perimeter-based security solutions clearly cannot hope to prevent the onslaught of malware-encrusted mobile devices, so organisations attempt to combat these with MDM solutions and strict Wi-Fi authentication, but malware can still get inside an organisation despite these often complex and expensive measures.

When taken in the context of all the other DNS-related threats, such as DNS-based DDoS and data exfiltration attacks, we believe that organisations should be considering dedicated DNS security products to add an additional layer to the defence-in-depth paradigm. The good news is that Calleva Networks can provide solutions to protect an organisation from internal threats as well as external threats, which can affect an organisation’s web presence and reputation.

Please feel free to contact us or comment below to discuss this topic in more detail.


CVE-2015-5477: Sorry, you will need to patch if you’re running BIND!


We don’t normally get too involved with discussing or publishing details about bugs and patches for BIND; however, the severity of CVE-2015-5477 has prompted a couple of customers to email me directly, I think just wanting a second opinion.

Basically, yes, you do have to patch BIND!

Unfortunately, the news from ISC is that this is pretty serious. To quote from a blog post their Incident Manager wrote about this vulnerability:

Almost all unpatched BIND servers are potentially vulnerable. We know of no configuration workarounds. Screening the offending packets with firewalls is likely to be difficult or impossible unless those devices understand DNS at a protocol level and may be problematic even then. And the fix for this defect is very localized to one specific area of the BIND code.

He continues:

I have already been told by one expert that they have successfully reverse-engineered an attack kit.

So there you have it. A single DNS query can trigger BIND to exit, and there is nothing you can do to prevent it unless you patch your servers. This is a dream for anyone who wants to launch a DDoS attack: simply spray these specially crafted malicious packets around the Internet and watch all those DNS servers crash (although technically it’s not a crash).
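
If you are not sure what you are running, checking the version before and after patching is straightforward. A quick sketch (the server address is a placeholder from the documentation range, and note that many operators deliberately mask or fake the version.bind response):

$ named -v
BIND 9.9.7-P2

$ dig @192.0.2.53 version.bind chaos txt +short
"9.9.7-P2"

ISC’s advisory listed 9.9.7-P2 and 9.10.2-P3 as the patched releases at the time.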

Sucuri report that they have already seen this attack in the wild. More details here.

Luckily if you are running a commercial DDI product, it should notice if BIND stops and simply restart it, meaning only a short outage. If not though, you may have an extended outage if you are not monitoring your servers and don’t detect the failure quickly.

Maybe it’s just better to patch than take the risk eh?


Configuring Google SafeSearch with Infoblox DNS Firewall


We recently did some work for a county council who wanted to enable Google SafeSearch for all the schools under their jurisdiction. Initially they were trying to use internal versions of google.com and google.co.uk with a CNAME record for www that redirected to forcesafesearch.google.com, but this is not an ideal solution for various reasons:

  • Other “google.com” services stop working, e.g. mail, docs, groups, maps etc., requiring a plethora of additional DNS records
  • Creating a zone called “www.google.com” helps catch just the www record, but you have to hard-code a domain apex A record, as you can’t use an apex CNAME (it contravenes the DNS RFCs)
    • Users can bypass the www record anyway by querying the apex “google.com”
  • There are many TLDs that Google uses; configuring DNS to catch every single one is an onerous task
    • DNAME records can be used to alias the various Google TLDs, but there’s a lot to configure and it still does not solve all the other problems

An easier way to do this is to use the RPZ (Response Policy Zone) feature in BIND to rewrite the query so that it always resolves to “forcesafesearch.google.com”. There are many articles about how to configure RPZ in BIND, but we are primarily interested in how to do this on Infoblox using the DNS Firewall feature. Unfortunately, the Infoblox admin guide is not particularly clear about how to implement it – hence this article! (A sketch of the equivalent raw BIND configuration follows below.)
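
For readers running plain BIND rather than Infoblox, here is a minimal sketch of the same idea (the zone name rpz.local, file name and SOA values are our own choices, not requirements):

// named.conf – enable the response policy zone
options {
    response-policy { zone "rpz.local"; };
};

zone "rpz.local" {
    type master;
    file "rpz.local.db";
};

; rpz.local.db – in RPZ, a CNAME pointing at a real name means “substitute this answer”
$TTL 300
@               IN SOA  localhost. admin.localhost. ( 1 3600 600 86400 300 )
                IN NS   localhost.
google.com      IN CNAME forcesafesearch.google.com.
www.google.com  IN CNAME forcesafesearch.google.com.

The rest of this article covers the Infoblox GUI equivalent.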

First, make sure you have an RPZ license installed on every Infoblox member that is going to be rewriting queries. You can use the “set temp_license” command from the command line and select option 13 to get a 60-day license if you want to try it. We are going to run through an example using some Infoblox systems in our lab.

The first thing to do is define the RPZ zone – click on Data Management->DNS->Response Policy Zone and click the “+” button:

[Screenshot: adding a Response Policy Zone]

This initiates the Add Response Policy Zone Wizard. Infoblox DNS Firewall has the ability to take a feed of suspicious domains and block them for security purposes, but in this example we are not going to use that feature; we just need to add a local response policy zone, so for step 1 of the wizard we select “Add Local Response Policy Zone”.

For step 2 of the wizard we give the RPZ a name that identifies the zone as a local one. The name does have to comply with DNS naming conventions, as it forms part of the FQDN that DNS uses to load the zone – just think of it as a zone file containing various entries, like any other zone. So in our example we call it “rpz.local”. We are not going to do anything with the queries at this level, so we set Policy Override to None. The severity is used for logging – it is the level used by syslog – so we set this to “Informational”, as the log entries are not critical in our opinion.

[Screenshot: Add Response Policy Zone wizard, step 2]

In step 3 we identify the name servers that are going to host the RPZ. In our example we have a primary name server and two secondary name servers that are configured in a default name server group to use grid replication. This is perfectly normal and may echo what you use to replicate your internal DNS zones. However this is where the Infoblox documentation is not very clear. Steps 4 and 5 of the wizard can be skipped, so if we just use our default name server group and select “Save & Close”, we get an error:

[Screenshot: error when saving the RPZ with grid replication]

What is happening is that the RPZ is being configured to use grid replication, but RPZ does not support it. It makes sense when you think about it – RPZ would normally be used to obtain a feed of malicious domains from the Infoblox ThreatSTOP service, and it does this using the standard DNS zone transfer mechanism (because ThreatSTOP is used by many people, not just Infoblox customers). So RPZ, by its very nature, uses standard zone transfers rather than the proprietary grid replication mechanism that Infoblox supports.

Whether you modify your existing name server group to use standard zone transfers is up to you, but I prefer to create a second name server group and use that for the RPZ – this is so that any changes do not affect any other zones hosted on the servers.

So, if you abort adding the RPZ zone, you can create a new name server group and configure the secondary servers to use zone transfers. This can be done via Data Management->DNS->Name Server Groups and then clicking on the “+” button:

[Screenshot: adding a name server group]

Go through the process of adding your grid primary server to the group, and when you start adding the grid secondary servers, ensure you select “DNS Zone Transfers” for the “Update Zones Using” option:

[Screenshot: setting “Update Zones Using” to DNS Zone Transfers]

Now that we have a suitable name server group, we can go back and use this for the RPZ zone we were trying to create:

[Screenshot: creating the RPZ with the new name server group]

Now that the zone has been created, you will be prompted to restart services; this gets the zone loaded onto the DNS servers. Once it has been loaded, however, you can start adding entries to it without having to restart every time (just like any other zone in Infoblox).

To test it, we can add a single entry into the RPZ as follows. First click on the zone name, rpz.local, to open it, then click on the arrow next to “+” to see a list of options. For this example we are going to choose “Substitute Domain Name (Domain Name) Rule”:

[Screenshot: adding a Substitute Domain Name rule]

For our example we are going to redirect “www.google.com” to “forcesafesearch.google.com”, so populate the query and substituted name fields:

[Screenshot: query name and substituted name fields]

Because the zone is already loaded, this will take immediate effect. To test it we can use nslookup. If we send a query to a DNS server that doesn’t have RPZ configured for www.google.com, this is what we get back:

C:\>nslookup www.google.com 192.168.0.1
Server: router.cn.corp
Address: 192.168.0.1

Non-authoritative answer:
Name: www.google.com
Addresses: 2a00:1450:4009:80d::2004
212.56.71.59
212.56.71.20
212.56.71.40
212.56.71.34
212.56.71.54
212.56.71.25
212.56.71.49
212.56.71.35
212.56.71.45
212.56.71.55
212.56.71.24
212.56.71.30
212.56.71.44
212.56.71.39
212.56.71.50
212.56.71.29

Now if we send the same query to the Infoblox server we just configured, this is the result:

C:\>nslookup www.google.com 192.168.0.61
Server: te820-1.cn.corp
Address: 192.168.0.61

Non-authoritative answer:
Name: forcesafesearch.google.com
Address: 216.239.38.120
Aliases: www.google.com

So the name www.google.com was intercepted and changed to forcesafesearch.google.com by the RPZ rule.

What happens if we query the Infoblox server for google.com?

C:\>nslookup google.com 192.168.0.61
Server: te820-1.cn.corp
Address: 192.168.0.61

Non-authoritative answer:
Name: google.com
Addresses: 2a00:1450:4009:80c::200e
212.56.71.118
212.56.71.114
212.56.71.88
212.56.71.99
212.56.71.89
212.56.71.103
212.56.71.93
212.56.71.94
212.56.71.113
212.56.71.119
212.56.71.108
212.56.71.109
212.56.71.84
212.56.71.98
212.56.71.104
212.56.71.123

So people can still bypass SafeSearch by omitting “www”, which means we also need a rule for the domain apex:

[Screenshot: adding a rule for the domain apex]

Now let’s try it:

C:\>nslookup google.com 192.168.0.61
Server: te820-1.cn.corp
Address: 192.168.0.61

Non-authoritative answer:
Name: forcesafesearch.google.com
Address: 216.239.38.120
Aliases: google.com

We can successfully redirect queries for the apex domain to SafeSearch, but now the issue is all the TLDs that Google operates in. If you look at this link you will find that Google operates 193 domains! Adding both the “www” and domain apex RPZ rules for each means we have to add 386 rules!

It’s actually worse than this, because there are other records inside the Google domains that resolve to the search engine page – “w”, “ww” and “m” are three that we found, and there are probably others. Instead of trying to catch just “www”, it seems we are going to have to use a wildcard, but again we have to be careful not to clobber other services like maps, docs, mail etc. The solution appears to be to use wildcards for everything apart from google.com.

Fortunately, Infoblox supports the mass creation of RPZ rules via CSV import; we just need to get the CSV file into the correct format. The easiest way to do this is to export the rules we have just created in “Infoblox CSV Import format”, modify the file to include all the Google TLDs, then import it:

[Screenshot: exporting rules in Infoblox CSV Import format]

The Google domains list will need to be reformatted to one domain per line so that we can get them into a useful format. The easiest way is to use a decent text editor like Notepad++ that supports regular expressions – we want to substitute all spaces (\s) with a newline (\n) character. Additionally, we need to append “.rpz.local” to the end of each domain name; we can do both at once by replacing \s with .rpz.local\n as follows:

[Screenshot: find-and-replace in Notepad++]

Hit “Replace All” and you should end up with a list of domains, one per line. Check that the last line of the list has “.rpz.local” appended, as the last line may not get modified (because there is no trailing space character for the search operation to find).

Using the find-and-replace facility we can now create two lists of domains: one for *.<google domain name> and the other for the apex domains. Open a new tab and copy all the domains into it so we have two copies to work with.

For the first copy, simply replace “.google” with “*.google”; for the second, replace “.google” with “google”. (A command-line alternative is sketched below.)
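
If you prefer the command line to a text editor, something like the following produces both lists in one go on a system with standard Unix text tools (the file names are our own, and we assume the raw list is space-separated with entries of the form .google.xx):

tr ' ' '\n' < google-domains.txt > one-per-line.txt
sed 's/^\./*./; s/$/.rpz.local/' one-per-line.txt > wildcard-rules.txt
sed 's/^\.//; s/$/.rpz.local/' one-per-line.txt > apex-rules.txt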

Now open up the CSV you exported earlier and copy each list of domains into the “fqdn” column. You can skip .com, as we’ve added that one manually. Copy the other values down the rows to match. You may also want to add entries for “w.google.com” and “ww.google.com” to specifically catch those queries. You can always add new ones through the GUI as you discover them.

To import the CSV file, use the “CSV Import” option in the toolbar.

[Screenshot: CSV Import option in the toolbar]

After you have imported the CSV file, you should be redirecting every Google search domain to forcesafesearch – here is a quick random check (google.cn and w.google.ae in this case):

C:\>nslookup google.cn 192.168.0.61
Server: te820-1.cn.corp
Address: 192.168.0.61

Non-authoritative answer:
Name: forcesafesearch.google.com
Address: 216.239.38.120
Aliases: google.cn

C:\>nslookup w.google.ae 192.168.0.61
Server: te820-1.cn.corp
Address: 192.168.0.61

Non-authoritative answer:
Name: forcesafesearch.google.com
Address: 216.239.38.120
Aliases: w.google.ae

One issue is that this is a static list; when Google adds new domains you will not know about it, so you may have to do a manual check every now and then to detect new domains and add them to the ruleset.

To see how many times the RPZ rules are being fired, you can enable RPZ logging in the Grid DNS Properties:

[Screenshot: enabling RPZ logging in the Grid DNS Properties]

You can view the RPZ logs by going to Administration->Logs->Syslog and selecting the “RPZ Incident Logs” quick filter:

[Screenshot: RPZ Incident Logs quick filter]

The logs show you the source IP address of the query and the name that was queried. The data is in CEF (Common Event Format), which means you can redirect the syslog output to a SIEM product like ArcSight or Splunk and process the data with complex search algorithms. Additionally, Infoblox have a reporting solution that can provide comprehensive reports.

Something else you will need to do is check for Google DNS queries that do not fire an RPZ rule – kids are clever, and if there is a Google FQDN that still gets them to the unfiltered search page, they will find it! Unfortunately, turning on query logging can have a big impact on performance, so you need a solution that either sniffs DNS packets at wire level, or you could try the Infoblox Reporting solution.

If you have made it this far, here is a little treat for you! You can download the CSV file containing all the Google RPZ rules I created earlier here: rpz-for-googlesafesearch-20151104.csv. Import this into your RPZ ruleset and it will save you all the manipulation described in this post. At the time of writing (4th Nov 2015) it contains 193 Google domains; it will be interesting to see how big the list grows over the coming years.


Using Infoblox DHCP failover


Infoblox DHCP is based upon ISC DHCP with a few tweaks here and there. The DHCP failover mechanism that it employs started as a relatively simple 14-page IETF draft proposal (available here) that was implemented in Alcatel-Lucent VitalQIP (then Quadritek, the authors of the draft). Over a period of time, the draft was reviewed, revised and extended until it reached 133 pages (available here).

I have had many arguments with my peers over the years about the complexity of the current draft (note: it is still not a full RFC) and whether it is strictly necessary (note: VitalQIP still uses the original simple draft proposal), but having deployed quite a few Infoblox systems now, I have come to the conclusion that it actually works pretty well. (If you really want to read something that will blow your mind, take a look at this formal perspective on the DHCP failover protocol.)

However, if certain things are not configured correctly, operational issues can occur, and this is when it helps to have an understanding of how the DHCP servers function when configured in a failover scenario and the various “states” that the DHCP servers can enter.

Pre-requisites
Firstly, there are a couple of things that should be checked to ensure a reliable DHCP failover service:

  1. Ensure your DHCP members’ clocks are synchronized using NTP. This is incredibly easy to achieve by configuring the grid master with at least three NTP servers. You can use publicly available NTP servers on the Internet, or your own, or a mixture. If you need to provide an NTP time source, we sell suitable NTP appliances that receive their time signal from a GPS satellite or long-wave radio signal here. Servers in a DHCP failover association must have their time synchronized; if they are more than 60 seconds apart, the failover association will not function normally and you will encounter problems that can cause a serious outage.
  2. Ensure all IP helper addresses/DHCP relay agents point to the IP addresses of BOTH DHCP servers. Do not think you can get away with just using one; you will experience problems.

Configure the Failover associations
We’ve already mentioned failover associations but not explained what they are.

Simply put, a failover association defines the relationship between a pair of DHCP servers. You can have multiple associations, such as discrete pairs of servers, or you can arrange them in a star/hub-and-spoke topology, with one or more servers acting as a central secondary, or peer. Technically speaking we don’t really have primary and secondary servers – they are just known as peers – but you do sometimes see references to a “backup” server, which really just means “the other peer”.

[Screenshot: DHCP failover association]

You can configure the load balancing value between the servers to determine which server handles more IP address requests. The default is a 50/50 split, so each peer will respond to requests on a (roughly) equal basis, which works fine for most deployments. However, one scenario where you may wish to change this ratio is a hub-and-spoke deployment, where the primary server is at a remote site, backed up by a central secondary server. In this case, you would configure the primary server to handle the majority of requests.

[Screenshot: failover load balancing settings]

The failover association does have some timers associated with it, but the default settings normally work quite well so it’s only necessary to tweak these in specific circumstances.

[Screenshot: failover association timers]

The timers are described below (an equivalent raw ISC dhcpd configuration is sketched after the list):

  • Max Response Delay Before Failover(s)
    The number of seconds that a peer can go without receiving a message from its peer before assuming the connection has failed and moving into a failover state.
  • Max Number of Unacked Updates
    The number of unacknowledged packets a peer can send before a failover occurs.
  • Max Client Lead Time(s)
    The initial lease time handed to a client, and the lease time given to clients in a failover situation. This also determines how long it takes for a peer to recover all IP addresses after moving to the ‘partner down’ state. The MCLT is a trade-off between lease time (so higher server load) and recovery time. The default of one hour is normally sufficient, but I have experienced situations where a shorter MCLT would have been beneficial – in fact, BlueCat takes this approach by using a default value of 30 minutes. A shorter MCLT can help divergent servers resynchronize faster after a major outage, but will cause additional server load.
  • Max Load Balancing Delay(s)
    Typically both peers will receive DHCP broadcast traffic from clients, but only one peer will respond (based on a hash of the client’s MAC address). If the peer that is not expected to respond continues to see client DHCPDISCOVERs (maybe the client is retransmitting because it did not receive a DHCPOFFER from the other peer), it can disable load balancing and respond to the client with a DHCPOFFER packet of its own.
    NOTE: This function depends on the client setting the ‘elapsed time’ value in its DHCP packet correctly. If the client doesn’t set this value (i.e. the elapsed time value is always 0), then the peer that is not expected to respond will remain silent and the client will not receive a lease. This behaviour has been seen on Linux clients due to a bug in the dhcpclient component.
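
For reference, here is how those same settings look in raw ISC dhcpd syntax – a sketch only, with placeholder addresses (on Infoblox these values are set through the GUI rather than by editing dhcpd.conf):

failover peer "BAS-THE" {
    primary;                      # this peer's role in the association
    address 192.168.0.61;         # this peer
    peer address 192.168.0.63;    # the other peer
    port 647;
    peer port 647;
    max-response-delay 60;        # Max Response Delay Before Failover (s)
    max-unacked-updates 10;       # Max Number of Unacked Updates
    mclt 3600;                    # Max Client Lead Time (s) - 1 hour
    split 128;                    # 50/50 load balancing (primary only)
    load balance max seconds 3;   # Max Load Balancing Delay (s)
}

The split value of 128 assigns half of the 256 MAC-address hash buckets to each peer, i.e. the default 50/50 ratio.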

Once a failover association has been defined in the Infoblox GUI, DHCP ranges can be assigned to it, and addresses will be divided up between the two peers. One thing to note is that both DHCP servers need to be assigned to the network; this ensures that fixed addresses get assigned to both peers (fixed addresses do not participate in the failover mechanism).

The DHCP service will need to be started on both peers, as it is not running by default.

Normal operation
The failover association status can be monitored in the Infoblox GUI, either on the dashboard (using the ‘failover association status’ widget) or in the DHCP panel. The status panel should be green with both peers reporting the status as ‘Normal’.

[Screenshot: failover association status]

Syslog is a good place to monitor DHCP failover associations. As mentioned earlier, normally only one peer will respond to a client’s DHCPDISCOVER broadcast. You will see messages similar to the following on the peer that didn’t respond:

dhcpd[32190]: debug DHCPDISCOVER from 00:0c:29:34:a6:d0 via eth1 : load balance to peer BAS-THE(1447370032p)
dhcpd[32190]: debug DHCPREQUEST for 192.168.0.111 (192.168.0.63) from 00:0c:29:34:a6:d0 via eth1 : lease owned by peer

Also, filtering by the phrase ‘failover peer’ gives a history of the different failover association states:

dhcpd[1802]: info failover peer BAS-THE(1447370032p): I move from normal to communications-interrupted
dhcpd[1802]: info failover peer BAS-THE(1447370032p): peer moves from normal to normal
dhcpd[1802]: info failover peer BAS-THE(1447370032p): I move from communications-interrupted to normal

Monitoring leases
When a DHCP client gets a new IP address, its lease time is the MCLT value (typically 1 hour). When the client renews the IP at the T1 timer (usually 50% of the lease time), it will get the lease time applicable to that scope (e.g. 7 days). The initial one-hour lease is given out because a DHCP peer can only allocate a lease for a limited amount of time (the MCLT) until its peer ‘knows’ about the lease.
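
As a worked example, assuming the default one-hour MCLT and a seven-day scope lease time:

t = 0:00      DISCOVER/OFFER/REQUEST/ACK – lease granted for 1 hour (MCLT)
t = 0:30      client renews at T1 (50% of lease) – lease now granted for 7 days
t = 3.5 days  client renews at T1 again – lease extended for another 7 days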

The ‘Current Leases’ view can be used to view information about current leases. When using DHCP failover, there will be two entries displayed for each IP address as both peers should have information about the lease.

[Screenshot: Current Leases viewer]

The lease ‘State’ column provides information about the current state of each lease. The usual states are:

  • ACTIVE: The address is currently in use by a client.
  • FREE: The address is available for lease, and will be handed out by the primary peer.
  • BACKUP: The address is available for lease, and will be handed out by the secondary peer.

Other states that may be seen are:

  • ABANDONED: The server received a ping response when checking if the lease is free prior to leasing, and therefore won’t offer it to a client. This typically occurs when someone has configured a static IP on their system that is part of a DHCP range. Abandoned leases are often misunderstood, especially the circumstances that lead to their reclamation by the DHCP server, so we have a whole article dedicated to the subject of abandoned leases here.
  • EXPIRED: The address is no longer used by a client, but is still bound to that client (meaning that the client will “probably” receive that address again if it asks for a new lease).

During the ‘Normal’ state, the pools of addresses available for lease by the primary and secondary peers may get out of balance. The DHCP service will periodically go through a pool rebalancing process to balance the number of free addresses on each peer. You will see messages similar to the following in syslog:

dhcpd[4379]: info balancing pool c016c0 192.168.0.0/24 total 10 free 6 backup 2 lts 2 max-own (+/-)1
dhcpd[4379]: info balanced pool c016c0 192.168.0.0/24 total 10 free 5 backup 3 lts 1 max-misbal 1

These messages show the total DHCP addresses for this range, the available addresses (free on the primary, backup on the secondary) and ‘lts’, which means ‘leases to send’ (from the primary to the secondary or vice versa).

Abnormal operation
When an event occurs to change the state of the failover association, it is useful to understand how the DHCP service will function during these different states. You can then plan the appropriate action to minimize the operational impact.

Each DHCP peer within a failover association can be in a number of states, as illustrated by the following, rather confusing, diagram:

[Diagram: DHCP failover state transitions]

State: ‘Communications-Interrupted’
When the communication traffic between the two failover peers stops, the failover peer(s) enter the ‘Communications-Interrupted’ state. Depending upon the nature of the failure, the Infoblox GUI will show one or both peers entering this state (i.e. the grid master might be able to communicate with both peers, but the peers themselves can’t communicate with each other). The failover association status will identify which peer is still contactable by the grid master, but will change its state to ‘Communications-Interrupted’; the other peer may simply be listed as ‘Unknown’ if the grid master has lost communication with it.

[Screenshot: failover association detail in Communications-Interrupted]

A few events that may cause this state are:

  • One of the peers failing.
  • A network issue is preventing the two peers from communicating (but both peers may still be seeing other DHCP traffic).
  • The peers are not synchronized with the same time.

During the ‘Communications-Interrupted’ state, the peer(s) will only allocate or extend leases for the MCLT period (e.g. 1 hour), as they cannot update their partner with lease information. This means that the servers will be busier than in the ‘Normal’ state.

A few things to note about the ‘Communications-Interrupted’ state:

  • A peer will only issue NEW leases from its pool of available leases (FREE on the primary peer, BACKUP on the secondary).
  • If one of the peers has failed, the surviving peer will not start issuing the leases that were available on the failed peer. Once it has exhausted its own pool of FREE or BACKUP addresses, it will no longer be able to issue any new leases. It’s at this point that an outage will be noticed by normal users, as new clients (e.g. laptop, mobile or wireless clients) will not be able to connect to the network.
  • A peer can renew any lease currently in use, whether it leased it out originally or the partner peer did.
  • Once a lease end time has passed (the client did not renew the address and the lease expiry timer has passed), the address will not go back into the pool of available leases. This could lead to ‘lease exhaustion’ on networks with reasonable client turnover if the failover association is left in the ‘Communications-Interrupted’ state for an extended period of time.

Once alerted to a failover association going into the ‘Communications-Interrupted’ state, an assessment should be made as to how long it will take to recover. If it is a network outage with a quick fix, you may want to leave the association in ‘Communications-Interrupted’ until the issue is resolved. If it is a little more terminal, then you should consider moving to the ‘Partner-Down’ state.

State: ‘Partner-Down’
If the problem that triggered the ‘Communications-Interrupted’ state is going to take a while to resolve, then putting a peer into ‘Partner-Down’ should be considered. ‘Partner-Down’ is an administratively entered state, whereby an Infoblox admin puts one peer into ‘Partner-Down’ using the Infoblox GUI. Note: it is important to understand which peer you select when doing this. Make sure you select the peer that is still up and running – you are telling it that its partner is down:

[Screenshot: setting a peer to Partner-Down]

Once the peer enters this state, it will reclaim all leases that belonged to its peer:

  • Available leases (FREE or BACKUP) will be reclaimed after MCLT plus STOS (Start Time Of State) has passed.
  • Leases in other states (ACTIVE, EXPIRED, RELEASED) will be reclaimed after the potential expiry timer plus MCLT has passed (if this is later than MCLT + STOS).

So it will still take at least an hour (if the MCLT is set to the default value) before the peer in ‘Partner-Down’ can start to lease out addresses that were owned by its peer. This one-hour period can seem like an eternity in an outage situation, so there may be justification for using a shorter MCLT when defining the failover association. It’s difficult to appreciate the importance of this setting unless you have experienced an outage.
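
A quick worked example with the default MCLT of 3600 seconds (all times hypothetical):

14:00  surviving peer put into ‘Partner-Down’ (STOS = 14:00)
15:00  FREE/BACKUP leases owned by the failed peer become available (STOS + MCLT)
15:30  an ACTIVE lease with a potential expiry of 14:30 is reclaimed (14:30 + MCLT,
       which is later than STOS + MCLT)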

There is one really important factor to consider before putting a peer into ‘Partner-Down’ mode:

Before putting one peer into partner down, ensure that the DHCP service on the other peer is not running!

Why is this important? If both the peers are operational when you place one of them into partner-down mode, both servers may stop issuing leases for a period of time when communication is re-established, as they will probably go into the ‘Potential-Conflict’ state. This is a bad state to be in and may lead to a service outage.

How to stop the DHCP service on a peer
If one of the peers is down due to a box failure, then we can be reasonably confident that the DHCP service is no longer running on it.

However, if there is a network issue that is stopping failover communication traffic and you decide to go into partner down, then the DHCP service must first be stopped on one of the peers. If the grid member can still be managed from the grid master GUI, then the service can be stopped using the GUI. If not, but SSH access to the peer still works, then either run the ‘set safemode’ CLI command to disable DHCP (note that this also disables DNS) or shut the box down (which will need a visit from someone to switch it on again).

I’ve heard someone say “we could just disable the switch port the peer is connected to”. This won’t work, as the DHCP service will still be running. Remember that the peers keep a log of the ‘Start Time Of State’ (STOS), so when both peers start talking again, they will check when they entered the last state and you could get a conflict.

Behaviour in ‘Partner-Down’
The surviving peer that was put into partner down will start the process of reclaiming addresses, once the relevant timers have expired. Once an address is reclaimed, the peer can now lease it out.

Whilst in the ‘Partner-Down’ state, lease times will be given out using the MCLT value (i.e. 1 hour).

Recovering from ‘Partner-Down’
Once the problem with the failed peer is resolved and it is back online, it will go through a recovery process, and then both peers should go back to the ‘Normal’ (green is good) state, issuing leases as normal.

The recovery process involves the two peers resynchronizing. Typically this would mean the recovering peer waiting for MCLT before moving to the ‘Normal’ state and starting to issue leases, but Infoblox have tweaked the process a little to speed this up:

  • If no changes have been made in the grid, a “fast recovery” mode is used and a move to the ‘Normal’ state is made without waiting MCLT seconds.
  • Recovery is now done at the DHCP range level rather than for the entire address space.

State: ‘Potential-Conflict’
This is typically found when one peer has been put into partner down, whilst the other peer is still running. It is not a good state to be in!

Once the peers enter this state, they immediately stop serving DHCP addresses and go through a conflict resolution process which involves:

  • The secondary peer sending changed lease information to the primary peer. The primary then moves to the ‘Conflict-Done’ state.
  • The primary peer then sending changed lease information to the secondary peer. The secondary then moves to the ‘Conflict-Done’ state.

Once both peers have moved to the ‘Conflict-Done’ state, they will then both move to the ‘Normal’ state and are good to go. Note, the RFC says that the primary server can start issuing leases once it has moved to the ‘Conflict-Done’ state, but in this implementation it doesn’t – it waits for the secondary peer to transition to the ‘Conflict-Done’ state also.

That sounds easy enough, right? Let’s look at a real-world case of a ‘Potential-Conflict’ issue:

The secondary server sends lease information to the primary:
13:33:53 dhcpd: info failover peer IB-FO-Pair: peer moves from communications-interrupted to potential-conflict
13:33:53 dhcpd: info Update request from IB-FO-Pair: sending update (334 leases)
14:04:13 dhcpd: info failover peer IB-FO-Pair: 1000 leases sent
14:24:03 dhcpd: info failover peer IB-FO-Pair: 2000 leases sent
14:31:27 dhcpd: info Sent update done message to IB-FO-Pair
14:31:31 dhcpd: info failover peer IB-FO-Pair: peer moves from potential-conflict to conflict-done

-> This took about 58 minutes.
(Note: during recovery, a message is logged in syslog for every 1000 leases sent)

The primary server sends lease information to the secondary:
14:31:33 dhcpd: info Update request from IB-FO-Pair: sending update (1277 leases)
14:37:13 dhcpd: info failover peer IB-FO-Pair: 1000 leases sent
14:45:12 dhcpd: info failover peer IB-FO-Pair: 2000 leases sent
14:53:54 dhcpd: info failover peer IB-FO-Pair: 3000 leases sent
14:58:41 dhcpd: info failover peer IB-FO-Pair: 4000 leases sent
15:00:23 dhcpd: info failover peer IB-FO-Pair: 5000 leases sent
15:01:11 dhcpd: info failover peer IB-FO-Pair: 6000 leases sent
15:01:26 dhcpd: info failover peer IB-FO-Pair: 7000 leases sent
15:01:37 dhcpd: info failover peer IB-FO-Pair: 8000 leases sent
15:01:37 dhcpd: info Sent update done message to IB-FO-Pair

-> This took about 30 minutes.

So, there was no DHCP service for around 90 minutes. The previous lease time given to clients was 60 minutes (MCLT), so every DHCP client dropped off the network at some point.

The lesson here is not to get into a ‘Potential-Conflict’ state if you can help it. But if you do find yourself in it, then you probably want to go back into ‘Partner-Down’ (correctly, this time) and then transition out again.

Force Recovery
One other option in the GUI is ‘Force Recovery’. This can be used when the primary and secondary peers are not synchronized. It puts the primary peer into the ‘Partner-Down’ state and the secondary peer into the ‘Recover’ state. During a force recovery, all leases in the databases are resynchronized; the secondary peer will not serve any DHCP leases for at least the MCLT (probably 1 hour) whilst resynchronizing with the primary.

This option should not be performed without due consideration, and you probably want to speak to your Infoblox support partner first before taking this action.

Wrap up

There is a lot of information to digest here! This article is based on experience gained during a real-world DHCP outage that occurred at a large company several years ago. The root cause was an incorrect NTP configuration that caused the DHCP peers to lose synchronization because their clocks drifted too far apart. This meant that the peers entered the ‘Communications-Interrupted’ state for an extended period, which led to lease exhaustion, ultimately resulting in clients not being able to access the network. The customer panicked – rebooting appliances, disabling switch ports and putting peers into partner-down mode – and the DHCP peers ultimately entered the ‘Potential-Conflict’ state, which just made everything worse.

The moral of the story is that while DHCP failover is quite complex, it does work pretty well if it is set up and managed correctly. It’s easy to blame the protocol, but most issues seem to be related to problems in the customer’s environment – just take a look at one of my rants here from a few years ago! Having an understanding of how it works is critical in this regard, and we hope that by publishing this article we can save someone from a DHCP outage caused by a lack of understanding.

Feel free to post a comment if you have any thoughts or questions.


Understanding Infoblox/ISC DHCP and “abandoned” leases


I have had several discussions lately relating to the recycling of abandoned leases in Infoblox DHCP (which is based upon ISC dhcpd). There seems to be a common misunderstanding about how the process works.

To recap, an abandoned lease occurs when the DHCP server encounters one of the following situations (the ISC dhcpd options behind the ping check are sketched after this list):

  • A client is attempting to obtain a new lease (via DHCPDISCOVER) and the DHCP server receives an ICMP echo reply (ping response) from the IP address it is attempting to offer
  • The client sent a DHCPDECLINE message to the DHCP server, indicating that it is rejecting the acknowledged address (usually because the client received an ARP reply for the address it was leased)
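
For reference, the ping check in the first case is governed by a couple of ISC dhcpd options, shown here with their usual defaults (on Infoblox these are managed through the GUI rather than dhcpd.conf):

ping-check true;   # send an ICMP echo before offering an address
ping-timeout 1;    # seconds to wait for a reply before concluding the address is free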

Abandoned leases give the DHCP server some knowledge about addresses within its DHCP ranges that are potentially occupied by other devices. It could be that someone has mistakenly hard-coded an address that conflicts with an address within a DHCP range (e.g. a printer), or the DHCP server may not have any knowledge of previous leases that have been issued if a migration is taking place from one DHCP server to another.
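For reference, on a stock ISC dhcpd server an abandoned lease is easy to spot in the dhcpd.leases file because it carries a distinct binding state. Here is a trimmed sketch (the address and timestamps are illustrative; Infoblox stores leases in its own database but uses the same underlying states):

  lease 192.168.0.181 {
    starts 4 2016/02/11 14:32:10;
    ends 4 2016/02/11 14:32:10;
    cltt 4 2016/02/11 14:32:10;
    binding state abandoned;
  }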

In the migration scenario, abandoned leases can occur when a new client comes onto the network before all the existing clients have had a chance to renew their leases. For example, a laptop user might move onto a particular subnet, and when the DHCP server attempts to find a new IP address it receives an ICMP echo reply because another client is already using that address (and that client hasn’t reached its 50% renewal interval yet, so it does not appear in the new DHCP server’s lease database). It is therefore quite normal to see some abandoned leases during a migration, and these can easily be cleared using the Infoblox GUI if required (although this is strictly not necessary).

However, it is not normal to expect an administrator to check for abandoned leases during normal use, so what happens if abandoned leases occur in a stable environment where no migration is taking place? It generally comes down to misbehaving clients: there have been reports of Apple devices using a “rapid DHCP” mechanism that can cause abandoned leases, similar problems have been reported with Android clients, and also with PCs that offload ARP handling to the network card while asleep.

It is really not acceptable to expect an administrator to keep clearing out abandoned leases to ensure there are enough free addresses in the DHCP ranges for legitimate clients to use, so what happens when all the available leases have been consumed by abandoned leases?

What should happen is that when the DHCP server has run out of free/available leases, it will start to reclaim and recycle abandoned leases. As long as nothing responds to the ICMP echo request (ping), the DHCP server will offer these abandoned leases to clients. However, the DHCP server still issues a ping to check that an abandoned lease really is free, and if it receives a response it cannot offer that address. So if every abandoned lease responds to a ping, the DHCP server will not allocate any of them, and this is the point at which clients cannot obtain a lease (and people start complaining that they can’t access the network). The solution is either to expand the DHCP range or to shut down the clients that are responding to the pings. If the culprits turn out to be misbehaving clients, you may have to investigate fixes for them, either by applying patches or simply by banning them from the network (you can use MAC address or vendor class filters on Infoblox to create a “deny” list that stops these clients accessing the network).
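On a stock ISC dhcpd server, the ping check and a MAC-based deny list look something like the sketch below; the OUI, subnet and range are illustrative, and on Infoblox you would configure the same behaviour through MAC address or vendor filters in the GUI:

  # Ping an address before offering it, and wait 1 second for a reply
  ping-check true;
  ping-timeout 1;

  # Match misbehaving clients on the first three octets of the MAC (the OUI)
  class "misbehaving-clients" {
    match if substring(hardware, 1, 3) = 00:17:f2;
  }

  subnet 192.168.0.0 netmask 255.255.255.0 {
    pool {
      deny members of "misbehaving-clients";
      range 192.168.0.100 192.168.0.200;
    }
  }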

In the following screen shot you can see that the DHCP server successfully reclaims an abandoned lease (192.168.0.181) in response to a DHCPDISCOVER and offers it to a new client:

[Screenshot: dhcp abandoned lease reclaim]

Something else to note whilst doing a migration: if a legitimate client’s lease does get abandoned because the lease database has not been populated yet, then when that client renews at 50% of the lease time, the DHCP server will reclaim the abandoned IP and reply with a DHCPACK. Remember, because the client sends a DHCPREQUEST at renewal time rather than a DHCPDISCOVER, there is no ping check involved, so the address won’t be abandoned again simply because the legitimate client responds to pings.

You can see this behaviour in the following screen shot: the abandoned lease on 192.168.0.181 was reclaimed when a DHCPREQUEST was received:

[Screenshot: dhcp abandoned lease request]

There is a useful ISC DHCP PDF document available here that might be worth a read if you have an interest in DHCP.

The post Understanding Infoblox/ISC DHCP and “abandoned” leases appeared first on DNS, DHCP, IPAM (IP Address Management) | Cloud Wi-Fi | SD-WAN | Calleva Networks.


An update on recent DNS & DHCP vulnerabilities


There have been several DNS and DHCP vulnerabilities published recently. All the main DDI vendors have now released patches as far as we can tell. Two BIND vulnerabilities in particular are serious enough to justify patching your systems. For Infoblox customers this means an upgrade to NIOS 7.2.5, which addresses the following vulnerabilities:

CVE-2015-8704: A DNS server could exit due to an INSIST failure in apl_42.c when performing certain string formatting operations. Examples include, but may not be limited to, the following:

  • Slaves using text-format db files could be vulnerable if receiving a malformed record in a zone transfer from their masters.
  • Masters using text-format db files could be vulnerable if they accepted a malformed record in a DDNS update message.
  • Recursive resolvers were potentially vulnerable when logging, if they were fed a deliberately malformed record by a malicious server.
  • A server which had cached a specially constructed record could encounter this condition while performing ‘rndc dumpdb’.

CVE-2015-8705: In some versions of BIND, an error could occur when data that had been received in a resource record was formatted to text during debug logging. Depending on the version in which this occurred, the error could cause either a REQUIRE assertion failure in buffer.c or an unpredictable crash (e.g. segmentation fault or other termination). This issue could affect both authoritative and recursive servers if they were performing debug logging.
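If you want to verify what a server reports before and after patching, the classic check is a CHAOS-class query for version.bind (the hostname below is illustrative). Bear in mind that many operators, Infoblox included, mask or refuse this query, so an empty answer proves nothing:

  dig @ns1.example.com version.bind CH TXT +short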

There has also been a serious DHCP vulnerability published, CVE-2015-8605. Fortunately, Infoblox tell us their DHCP server is not vulnerable, so no action is required:

Parameters that are vulnerable to CVE-2015-8605 are not defined, nor are they in use by Infoblox NIOS. Therefore Infoblox DHCP servers are not vulnerable.

The post An update on recent DNS & DHCP vulnerabilities appeared first on DNS, DHCP, IPAM (IP Address Management) | Cloud Wi-Fi | SD-WAN | Calleva Networks.

Calleva Networks announces the launch of D/R Application Switcher (DRAS)


Demonstrating an organization’s ability to invoke its disaster recovery plan is a regulatory requirement for certain institutions, and such tests need to be performed on a regular basis.

Most organizations use the Domain Name System (DNS) to facilitate their disaster recovery plans. DNS is used to route clients to “online” servers – it is already part of the infrastructure, and well understood.

However, reconfiguring DNS so that clients are redirected to the D/R datacenter instead of the production datacenter (and back again) is a labour-intensive, error-prone task, especially when there are hundreds of applications that need switching. Even a commercial IPAM solution only reduces the technical complexity involved; it does not remove the need to actually modify DNS records, and errors can still be introduced.
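To give a sense of what is involved per record, switching a single A record on Infoblox via its WAPI looks something like the sketch below (the Grid Master hostname, credentials, addresses and object reference are illustrative, and the WAPI version may differ on your deployment). Multiply this by hundreds of applications and the case for automation makes itself:

  # 1. Look up the object reference (_ref) for the application's A record
  curl -k -u admin:secret \
    "https://gridmaster.example.com/wapi/v2.5/record:a?name=app1.example.com"

  # 2. PUT the returned _ref with the D/R datacenter address
  curl -k -u admin:secret -X PUT \
    "https://gridmaster.example.com/wapi/v2.5/record:a/<object-ref>" \
    -H "Content-Type: application/json" \
    -d '{"ipv4addr": "10.2.0.25"}'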

Calleva Networks is proud to announce the release of D/R Application Switcher (DRAS) to help organizations streamline their business continuity plans by automating the switching of large volumes of DNS records.

Paul Roberts, CEO of Calleva Networks said, “This is our first major home-grown application and was built because we have seen large organizations struggle with the challenge of manually switching hundreds of DNS records when a business continuity plan needs to be invoked, either for testing or in a real-world D/R scenario. Manually reconfiguring DNS is an onerous, error-prone task, and we believe we can help organizations reduce errors and achieve their Recovery Time Objective (RTO) by removing many of the manual steps that are usually required.”

“We are initially targeting Infoblox customers globally, and DRAS works right out of the box with Infoblox by leveraging the rich APIs that Infoblox provides. However, we will add support for other DDI platforms depending on demand.”

For more information, please click here.

The post Calleva Networks announces the launch of D/R Application Switcher (DRAS) appeared first on DNS, DHCP, IPAM (IP Address Management) | Cloud Wi-Fi | SD-WAN | Calleva Networks.

CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow


It’s been a torrid few months for BIND, with various vulnerabilities and fixes published. This demonstrates the need for a robust patching schedule, and it may make sense to reserve slots in your change control process so that systems like DNS servers can be kept up to date with the latest security fixes.

However, it’s not just BIND that suffers from these issues; sometimes they are found in the underlying operating system or a component library. A vulnerability was recently found in the glibc library, which contains the routines used by the DNS resolver on many Linux-based systems, including many home broadband routers.

The vulnerability has received the identifier CVE-2015-7547 and can only be successfully mitigated by patching the glibc library. An interesting post from Cloudflare goes into some detail about the vulnerability.
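If you are unsure which glibc your Linux hosts are running, the quick checks below are a starting point (packaging varies by distribution). Note that vendors backport fixes, so the version string alone does not tell you whether a build is patched; check your distribution’s advisory:

  ldd --version | head -1   # glibc version on most distributions
  rpm -q glibc              # package version on RHEL/CentOS
  dpkg -l libc6             # package version on Debian/Ubuntu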

Infoblox have released new versions of NIOS to address this vulnerability: 6.12.16, 7.1.10, 7.2.6 and 7.3.2. We suggest you upgrade, especially if you haven’t done so for a few months, as these releases also fix some other recent vulnerabilities.

The post CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow appeared first on DNS, DHCP, IPAM (IP Address Management) | Cloud Wi-Fi | SD-WAN | Calleva Networks.

CVE-2016-1285, CVE-2016-1286 and CVE-2016-2088 vulnerabilities


Just a note that more vulnerabilities have been discovered that will require another round of patching. Infoblox have released a new version of NIOS to address these, and other vendors are publishing patches as I write this. The CVEs are summarised below:

CVE-2016-2088: A response containing multiple DNS cookies causes servers with cookie support enabled to exit with an assertion failure in resolver.c.

NIOS is not vulnerable because DNS cookie support is not enabled in the NIOS version of BIND.

CVE-2016-1285: A defect in the control channel input handling could cause the DNS service to fail due to an assertion failure in sexpr.c or alist.c when a malformed packet was sent to the control channel.

NIOS is not vulnerable because the DNS control channel is not enabled in NIOS.

CVE-2016-1286: An attacker in control of a server could use a deliberately chosen query to generate a response containing RRSIGs for DNAME records, causing the DNS service to fail due to an assertion failure in resolver.c or db.c and resulting in a denial of service to clients.

NIOS is vulnerable to this CVE and new versions of NIOS are available to address this.

To avoid this vulnerability, Infoblox strongly recommends that customers upgrade all NIOS DNS servers to one of the following NIOS releases: 6.12.17, 7.1.11, 7.2.7 or 7.3.3.

The post CVE-2016-1285, CVE-2016-1286 and CVE-2016-2088 vulnerabilities appeared first on DNS, DHCP, IPAM (IP Address Management) | Cloud Wi-Fi | SD-WAN | Calleva Networks.

Silver Peak Named A 2016 Gartner Magic Quadrant Leader



Silver Peak has been named a leader in Gartner’s 2016 Magic Quadrant for WAN Optimisation, and in comparison to 2015 moves ahead of Riverbed on the “Completeness of Vision” axis:

[Figure: Silver Peak’s position in the Gartner 2016 Magic Quadrant for WAN Optimisation]

Gartner said:

Silver Peak continues to evolve its strategic focus, which is well-aligned with the emerging market requirements for SD-WAN and the blending of SD-WAN and WOC capabilities. This includes a strong focus on data-center-to-data-center storage replication, aggressively championing virtual-appliance-based solutions in both the data center and branch office, and now focusing on a unified WAN optimization and SD-WAN solution to support new integrated WAN architectures.

Silver Peak now supports two product lines on a common software base; the traditional WAN optimization via its NX and VX product families, and the SD-WAN Unity EdgeConnect, which also includes the optional Unity Boost for full WAN optimization support. The Unity Orchestrator offers centralized policy-based network orchestration for both product lines. The Silver Peak Unity solution includes the Cloud Intelligence and Advanced Exterior WAN routing, which optimizes external SaaS applications, and monitors and maintains performance metrics for more than 50 SaaS applications.

Silver Peak continues to evolve its security capabilities for internet connectivity, and to support meshed IPsec tunneling, full SSL proxy, enhanced WAN path control, basic NAT and basic firewall screening, and integration with Zscaler. Consider Silver Peak for all branch office optimization needs, data center replication needs, and for hybrid WAN and SD-WAN.

Additionally Gartner said:

Silver Peak offers a very strong solution for optimizing data center storage replication, with segment-leading products and good strategic and go-to-market alliances with data center infrastructure companies, such as VMware, EMC, Hitachi and Dell.

View the complimentary report to learn why Gartner named Silver Peak a WAN Optimisation Magic Quadrant leader.


The post Silver Peak Named A 2016 Gartner Magic Quadrant Leader appeared first on DNS, DHCP, IPAM (IP Address Management) | Cloud Wi-Fi | SD-WAN | Calleva Networks.
