TL;DR – Go to Installing Squid
Yum is a great package manager for CentOS, and the secret envy of every Windows system administrator on the planet. However, there will come a time when you attempt a “yum update” or “yum install tcpdump” only to find out there is a problem with internet access from your server.
90% of the time you’ll probably find a network issue, or someone’s messed up the DNS resolver configuration. In some instances, though, the server legitimately has no internet access, and setting that access up is either not allowed or not practical.
Recently I worked on a server with two network connections: one to the management network and another to a VoIP signalling/media network. In this setup the default gateway was configured via the VoIP network, as that carries the mission critical services; all the management elements had static routes via the management interface gateway. The problem was that the VoIP network was internal and had no internet access available, whereas the management network did. Placing a static route for every possible Yum repository and mirror obviously isn’t an option, and neither was switching around the network configuration, so here comes the proxy.
The concept of a proxy is fairly simple: we’re going to tell Yum that all of its traffic should be sent to a specific IP address on a specific port. This IP address will be on a server with internet access that has the Squid proxy installed and listening on that port for inbound connections. Assuming the access lists on the proxy are configured correctly, it will then route that traffic to the internet and back on behalf of the originating server, giving the illusion of internet access for Yum. Simple!
So you need to find a server on your network that has IP connectivity both to the internet and to your other server that doesn’t have internet access; this is where the proxy (Squid) will reside.
First, use Yum to install the Squid application on this server, and then ensure that it’s going to start at boot.
yum -y install squid
chkconfig squid on
Now you need to define which client IP addresses are permitted to use your proxy; in our case this range should include the IP of the client that doesn’t have internet access. So edit the Squid configuration as below, replacing the IP range as per your network.
nano /etc/squid/squid.conf

acl allowed_clients_acl src 192.168.0.0/24
http_access allow allowed_clients_acl
Now restart the Squid service to apply the configuration changes:
service squid restart
It’s always worth checking that Squid is actually running and listening on the correct network port using netstat:
netstat -lnutp | grep 3128
tcp        0      0 0.0.0.0:3128       0.0.0.0:*       LISTEN      20653/(squid)
So our Squid proxy server should be working now; the next step is to actually configure the clients to use it. In the user’s bash profile (in this case root’s) we’re going to specify an environment variable that Yum will pick up on, so edit that profile text file:
Then just paste in this line, replacing the IP address with your Squid server’s (you can also use a hostname).
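As a minimal sketch (the address 192.168.0.1 is an assumption, adjust it to match your proxy server; 3128 is Squid’s default port):

```shell
# Append to the root user's ~/.bash_profile so yum sends HTTP traffic via the proxy
export http_proxy=http://192.168.0.1:3128
```

Log out and back in (or run `source ~/.bash_profile`) for the variable to take effect. Alternatively, yum can be pointed at the proxy directly with a `proxy=http://192.168.0.1:3128` line in /etc/yum.conf.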
Bingo – Try some yum commands on the server and you should be in business!
Any problems leave a question in the comments 🙂
The home and small business routers these days that us geeks would be interested in buying are shipping with SNMP server functionality built in as standard, and when it’s not there, there’s normally some way of breaking into the BusyBox Linux distro (that most of them use) and installing some kind of SNMP daemon.
However, there are always cases where that option’s not available for one reason or another. In these cases you can use a setup like Kurt’s, where he decided to build a passive bandwidth monitor (even though the router in his pictures does support SNMP?!).
See a basic video of it in operation here:
The basic setup consists of a passive network tap; this is basically just a fancy way of saying that you’ve cut into the pairs of a Cat5e network cable and added an extension of the pairs to your own device. The device that you add should do nothing other than monitoring, so that it’s not transmitting any data onto the cable that would confuse the other two hosts, which assume they are directly connected to each other with no other hosts on the network segment. The limitation of this setup is that you need physical access to the cable, and due to the nature of high speed Ethernet it would only work on 100Mbps connections or less.
The electronic brains behind the setup is an ENC624J600 chip to interface with the Ethernet layer, chosen because of its raw Ethernet functionality. This is connected to an ATmega128 over the SPI interface, which runs the core code to count packets and plot them on the LED display.
To have a look at Kurt’s full write up on the project, head over to here.
We all remember an imaginative childhood, wanting to be an astronaut shooting through the galaxy; well, for some children (adults?) that could almost become a reality!
Using a Raspberry Pi, an Arduino board and some other electronic wizardry, Jeff High Smith has come up with this awesome full featured mission control desk. Basically it’s a bunch of varied but satisfying switches, LEDs, speakers and screens that simulate a mission control desk and a realistic flight scenario.
It’s really something you need to watch to understand how awesome it is, sit back and click play on the below YouTube clip.
Now I’m sure you can’t wait to build your own, I mean one for your child of course! (adults wouldn’t be caught playing with this, would they!?). The whole build is documented on the Maker website with some more great pictures of the setup and helpful ideas on how to get going. Clicky here – http://makezine.com/video/making-fun-mission-control-desk/
Don’t forget we’re always wanting to hear about your projects, so leave a comment or pop us an eMail!
Just under a year ago my employer started off a network refresh program on one of our internal core networks. We’re a type of ISP, and this network was our main management core, so all the fairly important traffic goes over it: billing stats, element management traffic (Telnet, SSH and RDP), alarms (Syslog, CORBA, SNMP), plus the management traffic from NMS systems communicating with their respective network elements for user service provisioning.
As we’re a service provider the management core is actually very dynamic, because we’re constantly upgrading the network capacity, hardware and features while also decommissioning the legacy equipment. As someone working on these projects you often find yourself troubleshooting the core management connectivity more than customer services; good for the customers, but a pain when your real troubleshooting equipment isn’t targeted at this network and is invested in the revenue generating networks.
The original design
The topology was fairly simple about a year ago: each site generally had a pair of switches (Cisco, Layer 2) to provide local element access into the network with resiliency, then two core firewalls also operating in Layer 2 mode, and finally two core MPLS routers which connected all sites together via a VPRN/VRF Layer 3 routed service and acted as the Layer 3 gateway for all the elements/VLANs at that site.
Our future plan
As goes with most network upgrades we all sat down and decided over a cup (or two) of coffee what we wanted to get out of this network refresh, management set aside a decent budget for the work, but there wasn’t going to be any drastic changes or upgrades.
Access – The access layer is fairly simple and that’s the way we wanted it to stay. The older switches got replaced to make all the sites capable of gigabit access, and all the configuration went through an intensive cleanup exercise to get rid of redundant configuration, plus a tightening up of access security and a general sanity check (such as ensuring all the switches are running the same version of spanning tree! No names to be mentioned).
Firewalls – When looking at firewalls there was a general decision to push these up the OSI stack to Layer 3, mainly because the MPLS core configuration was getting quite busy with sub-interfaces. With this change we also wanted some kind of routing protocol into the core to keep everything fairly dynamic; please, no more static routes!
Vendor selection started off tricky but there was a clear winner in the end. Historically we had always used Cisco PIX/ASAs, however this time we looked at Palo Alto, mainly for the added security features; a lot of the elements on this network run proprietary operating systems where you can’t install any third party software, so having a firewall that could perform anti-virus and spot threats on the network (like brute force detection) was very desirable. When the cost/performance turned out to be within ~4% of the Cisco alternative we couldn’t really say no.
Core – As already mentioned, the main change here was removing all of the access interfaces from the configuration and setting up a routing adjacency (OSPF) with the firewall, which now hosts all of the local gateway addresses for each VLAN. There were no changes to the hardware/vendor at this layer as there were no real requirements.
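To illustrate the shape of that change, here is a minimal sketch of the core-router side of such an adjacency, written in a generic Cisco-style CLI purely for illustration; the process ID, interface name and subnet are made up, and our actual core uses a VPRN service rather than this exact configuration:

```
! Hypothetical core-router side of an OSPF adjacency with the site firewall
router ospf 1
 ! Keep every interface quiet by default...
 passive-interface default
 ! ...and only form the adjacency on the link facing the firewall
 no passive-interface GigabitEthernet0/1
 network 10.0.0.0 0.0.0.3 area 0
```

With the firewall advertising the site VLANs into OSPF, the core no longer needs a sub-interface or static route per VLAN.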
The unexpected benefit
The original plan was fairly simple: keep the network in support by means of hardware upgrades, and clean up the design slightly by moving routing down one step to the firewalls. At the vendor selection phase we also realised that our security could be greatly improved by going with the Palo Alto firewalls.
However the biggest advantage by far has been the logs that these firewalls create and push back to their management server (Panorama).
By design the firewall creates a start and end log for every session that passes through it; with this you get all of the basic information like IP addresses, timestamps and ports. However, you also get more, like URL logging, accurate application identification, and packet and byte counts for both Rx and Tx. Plus, unlike pushing your logs to a syslog server, the management server has really good reporting and searching functionality built in.
The network engineers have fallen in love with these firewalls purely because of the logs. For example, say you have a server (10.1.1.1) at Site A that’s having problems communicating with a server (192.168.2.2) at Site B; in our design we know this traffic flow will pass through a minimum of two firewalls (one at the edge of each site). So you simply log onto the firewall management server and search for related sessions with a filter like “ip.addr in 10.1.1.1 and ip.addr in 192.168.2.2”, then as quick as a Google search (well, maybe a few seconds slower) you get a detailed list of all the traffic flows between those two addresses.
In the above example it was fairly simple to see that traffic made it all the way through our core network to the other side, then back into the MPLS core; however, it never hit the next firewall in the path (proven by a lack of Rx traffic on that firewall). A quick look at the MPLS core found some old static routes that needed to be fixed, and 3 minutes later we were back in action; no need for pings, traceroutes, packet captures etc.!
Security that works
As mentioned, a big initial driver for using these Palo Alto firewalls was the extra security. Fortunately, when we turned them on we didn’t find a raft of security threats already in the network; in fact the only threats we really see are people (from the internet, and normally in China) trying to breach our DMZ, where one of these firewall pairs sits.
We did once get an SSH brute force alarm from the internal network, which the firewalls instantly managed to filter out. A quick search found that it came from one of our employee laptops (VPN client), and a phone call revealed that he was playing around with a Python script that logged into one of our core routers to process the configuration for an upcoming network migration. The wrong credentials in the script and some not so great coding (causing a login loop) had triggered a brute force alarm on the firewalls.
It changed us!
We’re finding network engineers actually prefer to put in firewalls now instead of “simple routers” because of the extra visibility you can leverage from the network logs, and the security capabilities are blowing our customers’ minds when you can track security breaches down in a matter of seconds.
As always, comments are welcomed and appreciated!
So at PingBin we have decided to start doing a weekly round up of great Raspberry Pi projects. The plan is to have one of these every week, assuming you Pi hackers pump out enough cool projects that we can blog about; if you have any projects you would like mentioned, just pop up an eMail or Tweet.
Time to get started!
SnowBoarding HUD Pi Style
So first up is a great snowboarding HUD (Heads Up Display) from a guy named Chris; check out his blog post here. The hack basically builds some MyVu glasses into a set of normal snowboarding goggles; the MyVu can interface directly with the Raspberry Pi’s normal video output (not HDMI), which makes the Python code a lot easier to write as it’s just a generic display.
Along with the Pi and glasses there is also a battery pack to provide enough power while throwing yourself down the slopes, and a GPS dongle, which means using the power of Python you’re going to get some great features such as accurate speed, route mapping, top speed etc.
All in all it’s a great project, however at £160 it’s certainly not the cheapest.
Need some help parking?
The next project again uses some nifty Python programming. This time Jeremy (blog) has used Python with a webcam and a small LCD display to measure and display distance readings to help a user park; it’s certainly better than those annoying beeps you get from most reversing sensors these days.
As with most great projects, all of the code is on his blog if you want to make your own or just have a play around; the distance measuring is also covered in another blog post by Jeremy which we have featured before. Keep up the great work, and our only request would be to integrate that bell as well 🙂
Finally some news…
A bit off track here, but this week Sony announced that they have created their 500,000th Raspberry Pi board at their factory in Wales; that’s a production rate of 40,000 Pis a week, now that’s a lot! At this rate we are expecting the 1 millionth to be created some time in July. If you want to read more on that story click here.