Identifying which zone a subnet is in on a Palo Alto firewall – Script

One of the challenges with managing any zone-based firewall at scale is knowing which zone everything is in. We all know the network should be well documented, but we also know that routing tables get unwieldy, and it’s not uncommon, when adding a firewall rule, to wonder exactly which zone a source or destination is in.

There are three ways to find the zone:

The GUI Way

Log in to the web interface of the gateway – either by changing context from Panorama or going direct – and navigate to Network -> Virtual Routers. From this page select the relevant virtual router and click “More Runtime Stats”. Find a route matching the destination and note down the egress interface. Close this window and navigate to Network -> Interfaces. Find the egress interface you just noted down and view which zone it is in.

The CLI Way

It can be slightly quicker (if you remember the commands) to SSH to the gateway and run a test against the routing table. This can be easier because, in the GUI, you have to find the matching route yourself – with the test, it tells you which route is matched.

The command is:

test routing fib-lookup virtual-router <virtualrouter> ip <IP>

This will give you an output like this:

admin@PA-VM> test routing fib-lookup virtual-router default ip <IP>
runtime route lookup
virtual-router: default
via interface ethernet1/24.20, source <source-IP>, metric 65434

This clearly tells you the exit interface for the route – you then run a show interface command to see the zone for it:

admin@PA-VM> show interface ethernet1/24.20 | match Zone
Zone: outside, virtual system: vsys1

So now I know the correct zone for that rule is “outside”. And it doesn’t matter whether that’s the source or destination – as long as your routing is symmetric, it’s the same procedure for identifying a source zone: you just look at how you’d get back to the source to figure out where it comes from.

Using the API

The same result can be achieved with the XML API, following exactly the same procedure as the CLI.

You have to send two requests – one op command for the fib-lookup, and a second for the interface query.

And…..who wants to remember all that?

But using the API means we can script it – so here it is.


This little script runs on Python and takes three inputs:

  1. virtualrouter – the name of the virtual router you want to do the routing lookup in
  2. ipaddr – the IP address you want to look up
  3. credentialfile – the path to the credential file.

The credential file is simply a text file with the device name or IP on line 1 and the API key on line 2. There’s another script in the same repo which will create that for you interactively.

I often put handy little scripts like this on a central VM somewhere that the whole team can use, to make their lives just that little bit simpler.

Goodbye, Hello

This blog has been up for a few years now, with its first post made back in 2013. It has a reasonable number of unique daily visitors – a lot more than I ever thought it would have.

A lot has changed since then, not least that I’ve set up a business offering consultancy in networks and network security. To help promote the business, I’ve decided to incorporate my blog into my company website. So here it is – with all the old posts included (and I’ve set up a redirect from the old site for each post too, so your bookmarks will still work!).

I hope that the posts you find on here can still be helpful, and I’m definitely hoping to find the time for more updates.


Scout2 and Security Monkey – AWS Security Auditing

I recently had cause to do some auditing of a pre-built AWS environment. The lazy guy in me tried out some free tools to speed things up.

Security Monkey

First up was Security Monkey, made by Netflix – it can be found on their GitHub. It’s actually really well documented, and I just followed their setup guide verbatim – I had a working setup in about half an hour, and good visibility of the AWS account and some suggested vulnerabilities.

Whilst it is definitely useful for a one-time audit, it seems to excel at continuous monitoring. With Security Monkey you have the ability to add comments and justifications to issues it found – so if you have something it considers a flaw, but you’re aware and have mitigated it in other ways, you can justify it and it will stop moaning. It re-scans the environment every 15 minutes or so, so it keeps you constantly up to date as changes are made.

The detail you can find in Security Monkey is great – but I didn’t find the UI all that intuitive. It took me a while to figure out how to navigate around and find what I was looking for. Once I got used to it it was perfectly fine though.

Scout 2

Next I tried Scout 2 – not because Security Monkey was inadequate; I just wanted a second opinion.

This can also be found on GitHub.

I didn’t find their documentation quite as detailed as Security Monkey’s, so I took some notes as I set it up – the basic order of events was:

  1. Create a policy called Scout2 in IAM in the account you are going to be interrogating. Make up your own if you like, or just use the one provided in the GitHub repo.
  2. Create a user in the same account, with access type programmatic. It won’t need management console access. Save the keys somewhere safe.
  3. Attach the policy to the user.
  4. Stand up an EC2 instance – I just used a t2.micro with the Amazon image; it was plenty sufficient. It can be in any account – it’s just going to use the API to access stuff. You don’t even need an EC2 to be honest; you can just use the AWS API from your local machine. Use a decent security group, allowing HTTP and SSH from your IP.
  5. Run through the following sequence on the EC2:
    1. sudo apt-get update
    2. sudo apt-get install python-pip
    3. pip install scout2
    4. echo "accesskey,secretaccesskey" >> ~/creds.csv
    5. Scout2 --csv-credentials creds.csv --regions eu-west-1 (obviously change the regions as you require)
    6. sudo apt-get install apache2
    7. sudo systemctl start apache2
    8. sudo mv scout2-report/ /var/www/html/

At this point you should be able to browse to the server on http and see the report.

I quite liked the view in Scout2. It picked up a few things that Security Monkey didn’t, and presented them in a different format.

Both of these products were decent – they both have their uses, and I would definitely use them both for auditing again in the future. Obviously you still need some common sense and knowledge to review what they present, but for a quick starter for ten it saves a lot of digging around the AWS dashboard to review all the configurations.

There are plenty of other tools to be found on Google, and they all have their merits – there are also a number of useful tools provided by AWS themselves which aren’t to be dismissed.

Automated Deployments of Palo Alto Firewalls in AWS

I’ve recently been working with a client on magically spinning up entire environments in AWS. This means I’ve learned a fair bit about AWS along the way!

Without going into too much detail (as it’s the client’s work), we have been bootstrapping Palo Alto firewalls. This lets you stand up a fully configured Palo Alto firewall from a CloudFormation script in AWS, in a matter of minutes. That’s pretty cool.

Palo Alto are pretty helpful with this – they provide a decent sample on their GitHub.

From this, you can amend the scripts as appropriate to fit into your own environment – this method does rely on having a full configuration for the firewall to bootstrap from, available in an S3 bucket. If this is static, then easy. If not, you’ll have to do some magic elsewhere, before calling the CloudFormation script, to make sure the config you need is in the bucket.

One of the challenges we faced was the interface limit (which depends on which EC2 instance type you choose). This means the example from Palo Alto does not scale too well – if you have too many subnets, it becomes impossible to put a Palo interface in every subnet. To get around this, you can add routes in the routing tables pointing at the ENIs (Elastic Network Interfaces) of the Palo. This means you can have multiple subnets behind one interface.
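As a rough sketch of that workaround (the resource IDs below are placeholders, and this assumes plain boto3 calls rather than the client’s CloudFormation templates – ec2.create_route does accept a NetworkInterfaceId target):

```python
# Sketch: put several subnets "behind" one firewall interface by pointing
# their routes at the firewall's ENI. All IDs below are placeholders.
def eni_route_specs(route_table_id, firewall_eni_id, cidrs):
    """One create_route kwargs dict per subnet CIDR, all via the same ENI."""
    return [
        {
            "RouteTableId": route_table_id,
            "DestinationCidrBlock": cidr,
            "NetworkInterfaceId": firewall_eni_id,
        }
        for cidr in cidrs
    ]

def apply_routes(ec2_client, specs):
    """ec2_client would be boto3.client('ec2'); each spec becomes a route."""
    for spec in specs:
        ec2_client.create_route(**spec)
```

Remember to disable source/destination checking on the firewall’s ENI too, or AWS will drop the transit traffic.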

eBGP – ECMP in depth!

My client recently made a fairly big change to the edge network in their data centre, including a migration to 4-byte AS numbers. This wasn’t without its challenges. So here is a (long) post about the challenges we faced, and some explanations of some of the more advanced BGP features, such as local-as no-prepend replace-as and bestpath as-path multipath-relax.

Here is a very simplified version of the topology, post-change – everything is fictional. The two ISPs provide a private MPLS.

Continue reading

Testing a 1 Gb Internet circuit

Have you ever needed to prove a gigabit Internet circuit? It’s more of a headache than you’d think. I had to prove one recently – we were seeing some errors which seemed to happen every time the bandwidth went over about 400 Mbps outbound, so we needed to prove we could push more. We could ask the ISP to run some tests – but I’m an untrusting kinda person. Plus, those tests wouldn’t include some of the internal infrastructure which we also wanted to prove.

Download is easy. Get a bunch of users to download the CentOS Everything ISO (or anything else that’s a few gig), and watch it get hammered.

Upload is trickier. It’s hard to push that much data without somewhere to push it to – you need to know the remote end can handle a gig, and the throughput is also affected by latency (it is on the download too, but that’s easier to max out by just getting more users to download).
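The latency effect is just the TCP window maths – a single stream is capped at roughly window size divided by round-trip time. A quick back-of-envelope sketch (numbers illustrative, not measurements from our circuit):

```python
# Rough single-stream TCP throughput ceiling: window / RTT.
def max_throughput_mbps(window_bytes, rtt_ms):
    """Upper bound for one TCP stream, in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A classic 64 KB window at 20 ms RTT caps one stream around 26 Mbps -
# which is why it takes a lot of parallel uploads to fill a gigabit.
single_stream = max_throughput_mbps(64 * 1024, 20)
```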

So, I stood up 10 servers on Digital Ocean. I installed CentOS, and configured vsftpd – plenty of guides on the internet. I then downloaded the CentOS ISO to 10 laptops or servers in the organisation (see – Download is easy :o)), and shouted 3, 2, 1, GO and everyone clicked upload in FileZilla.

It worked really well – we smashed the Internet, and whilst we didn’t quite hit a gig, we got close – close enough to be confident we could push it more than we do day to day.

And….it cost me…. $0.10. Ten…cents…even with the crappy exchange rate today, I can stretch to that. As soon as we finished I destroyed the servers – they were up for less than an hour.

So – we can definitely get more throughput than we thought (sorry for the blame, ISP!). Best we go figure out what the real problem is then.

PS: I have no ties or links to Digital Ocean – I just picked them because they had a DC in the UK and it was simple and cheap. I also did check their ToS – I’m not a lawyer, but as I wasn’t out to break their service I don’t think I did anything wrong – if I did and anyone at Digital Ocean is upset then I’m very, very sorry and I’ll gladly delete my account and star out your name in this post! :o)

PPS: To be transparent… the two Digital Ocean links above are referral links, which give you ten dollars of free credit and earn me 25 dollars of credit if you spend 25 dollars with them. If you don’t like referral stuff, there’s a plain link too.

VCP 6 passed – like the new Fault Tolerance features!

I recently updated my VMware certification from 5.5 to 6. My 5.5 was expiring, so it made sense to do the delta exam and upgrade rather than recertify at the same level. I realise I’ve done this just as 6.5 is coming out, but I’ve been using 6 lately so it made sense to me.

A lot of the maximums in VMware have been increased, and a good summary of the new limits is easy to find in VMware’s documentation.

One of the areas of most interest to me was the big improvement in VMware Fault Tolerance (FT). A couple of years ago I was investigating multiple options for a high availability (HA) VPN solution, and looked into using the CSR1000v to terminate the VPNs. The idea was to have one Cisco device and let VMware FT handle the resilience. The advantage would have been purchasing and licensing only one CSR, and not having to worry about any kind of stateful IPsec synchronisation between two devices. One of the main issues was that, to get decent performance out of the CSR, we needed multiple vCPUs, and FT wouldn’t support that. In version 6, FT is now capable of up to 4 vCPUs. This improvement has potentially made the solution worth exploring again, if I ever need to.

Obviously there are a whole host of other differences, but there are hundreds of other sites out there reviewing them all, so use Google!

Thanks to Keith Barker at CBT Nuggets for the useful videos!

Off-site backups for Synology NAS – using two Raspberry Pis, behind dynamic NAT IPs

I recently bought a 4-bay Synology NAS (DS416 Play) to move away from Dropbox and OneDrive. The main issue I had before choosing to do this was off-site backups. It’s OK having 4 disks for resilience, but if my house burns down or gets burgled, I still lose everything.

So I started to think up ways of doing an off-site backup, without having to remember to do it or drive around with hard disks. I came up with the idea of putting a Raspberry Pi at a family member’s house with an external drive attached, and rsyncing to it. If I do this in the middle of the night, it won’t noticeably interfere with internet connections.

The main issue is that family members don’t have static IPs (they are behind typical ISP routers with dynamic IPs and NAT), and the Synology makes an outbound connection to do the rsync. So I decided to use an intermediate server which I already have on the internet, and tunnel the rsync over a reverse SSH tunnel. Another stumbling block was the Synology trying to be too clever – initially I tried to set up a reverse tunnel on the NAS itself, but the HyperBackup software won’t let you back up to a local IP. For this reason I ended up with another Raspberry Pi next to the NAS, though I suppose you could use any device that’s always on. I could have cut out this second Raspberry Pi by going straight via the external box too, but I didn’t want to open the forwarded ports to the internet – using the second Pi, the forwarded port is only open to my LAN.

So the topology will end up looking like this:

NAS — LocalPi — InternetServer — RemotePi

Initially though, it’s best to set up this:

NAS — RemotePi

This lets you test that the rsync works, and also lets you do the initial sync (which might be a big one) on the LAN rather than uploading it to the internet over ADSL!

So, let’s get started.

Continue reading

Python Scripting on a Cisco Nexus 7k

A few days ago I stumbled upon the Python interpreter on the Nexus platform. It got me tinkering.

In the past, I have had a requirement to grab a list of all of the interfaces on a box, the IPs, and the masks. The interfaces and IPs can easily be obtained from a show ip int br, using column select to grab the relevant columns (hold down Alt when you are selecting in PuTTY – if you didn’t know that before, go try it!). Getting the subnet masks for those is a little less trivial though.

As a side note, in the past I’ve used this:

sh ip int | i is up|Internet add

This works, but you have to mess a little to strip out just the bits you want (not a lot of work with a decent text editor though, I admit).

Anyway, more just to see if I could, I wrote a Python script to extract the structured dictionary response from a “show ip interface” and parse out the relevant pieces, printing them into a fixed-width table under the columns ‘Name’, ‘IP’, ‘CIDR’, ‘Mask’, ‘Admin’, ‘Link’, ‘Protocol’.
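The formatting part is easy to sketch. This isn’t the script itself – it assumes the structured output has already been flattened into simple per-interface dicts, so the field names here are placeholders rather than the exact NX-OS dictionary keys (which vary by release):

```python
# Sketch: turn per-interface dicts into the fixed-width table described
# above. Field names ("name", "ip", ...) are simplified placeholders.
import ipaddress

COLUMNS = ("Name", "IP", "CIDR", "Mask", "Admin", "Link", "Protocol")

def mask_to_cidr(mask):
    """'255.255.255.0' -> 24"""
    return ipaddress.IPv4Network(f"0.0.0.0/{mask}").prefixlen

def format_table(interfaces):
    rows = [COLUMNS]
    for intf in interfaces:
        rows.append((
            intf["name"], intf["ip"], str(mask_to_cidr(intf["mask"])),
            intf["mask"], intf["admin"], intf["link"], intf["protocol"],
        ))
    # Pad every column to its widest cell, plus two spaces of gutter.
    widths = [max(len(r[i]) for r in rows) + 2 for i in range(len(COLUMNS))]
    return "\n".join(
        "".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in rows
    )
```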
Continue reading

Check Point Certified System Administrator (CCSA) Study Notes – R77


I’m now a Check Point Certified System Administrator (CCSA)! I took the R77 exam and passed. I have to say I was a little disappointed with the exam – there were 100 questions in 90 minutes, but I found a lot of the questions were repeated, albeit with slightly different phrasing.

Below are the notes I made while I was studying. Definitely lab it (download the ISO and set up some VMs – make use of the free eval mode!). There are a few questions that require familiarity with the UIs and knowing where to find certain config options. I’ve been working with Check Point pretty extensively for a couple of years now, and have worked on some pretty big upgrade projects, but I’m definitely glad I put some hours in studying and labbing – my experience hadn’t exposed me to all of the features, and there is no way I would have passed without putting some time in with a virtual lab.

I haven’t edited them – it’s a straight copy and paste from OneNote, so they may not be formatted correctly, and they contain some notes that might only make sense to me! But here they are anyway; they may be useful to someone.

Continue reading