NoahK's Journal


The Archives


Friday, June 12th

The Difference Between Perl and Ruby

---------------------------
Hello World in Perl (UNIX):
---------------------------

vi hello.pl

print "Hello World!\n";

perl hello.pl

---------------------------
Hello World in Ruby (UNIX):
---------------------------


cd /usr/src
mkdir ruby-install
cd ruby-install
wget ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.1-p129.tar.gz
tar -xzf ruby-1.9.1-p129.tar.gz
cd ruby-1.9.1-p129
./configure --prefix=/usr/local
make
vi Makefile
make
vi Makefile
make
make install
/usr/local/bin/ruby
^C
cd ~
vi hello.ruby

puts "Hello World"

ruby hello.ruby
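Both versions can also be run straight from the shell, no files (or three `make` runs) required, which makes the contrast even clearer:

```shell
# one-liners: Perl needs the explicit newline, Ruby's puts adds its own
perl -e 'print "Hello World!\n"'
ruby -e 'puts "Hello World"'
```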


Friday, June 12th

William!

Seems like I only update this when we have another kid... oh wait. :)

My new son William was born very early on April 3rd, 2009 (yes, Wes's birthday). He was our smallest, but is a very calm kid. We are very tired, but very happy!


Monday, September 3rd

Audrey!

Yahoo!
Welcome to the spinning world! On Thursday, August 30th, Audrey Mae Korba was born, weighing in at 7lbs, 6oz (just an ounce smaller than her brother). We're finally home (came home Saturday), and we're slowly learning the balance of entertaining two children. :) Wes loves his little sister, and asks to hold her regularly! I hope we get some sleep!


Saturday, July 28th

You'd think I've been keeping this up

I thought it'd be funny to write a journal entry and claim I've only been away for a week. Unfortunately, that date below happens to be from 2006. So yes, it's been over a year since I've written in this pile. Not a whole lot has changed - well, except for us expecting again in September (a girl this time), a promotion to "senior" (which means double the work, but well compensated, or so I hope), and not a whole lot of travel (a few trips to Chicago, and a trip to Vegas coming up shortly). I've been working on the basement quite a bit, getting it ready to be our permanent office. I've already got the computer and server down here, along with most of the office upstairs (so we can start on Audrey's room), but have yet to drywall, insulate, or run the low-voltage wiring. I'll get to that tomorrow.

Anyway, that's it for now. Here's to yet another promise to try to keep this updated!

Music: BT - Superfabulous

Project: Basement, of course


Friday, July 21st

NFS-SSL Read Performance

To follow up on the original NFS-SSL performance tests, I performed a read test across the NFS system. Overall it performed well, experiencing about a 4% reduction in speed versus plain NFS (1.58 MB/sec vs 1.65 MB/sec, or about a 70 KB/sec difference). Combined user/kernel space degradation hovered around 5%, which was also expected.
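Those figures are easy to sanity-check from the two measured rates (using 1 MB = 1000 KB):

```shell
# recompute the read-test delta from the measured rates
awk 'BEGIN { plain = 1.65; ssl = 1.58;
  printf "%.0f KB/sec slower, %.1f%% reduction\n",
         (plain - ssl) * 1000, (plain - ssl) / plain * 100 }'
```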


Tuesday, July 18th

Diskless Linux

A common theme among IT professionals these days is going "diskless", or booting computers up without ever having to touch a drive. We don't really have a need for this in our development network, as many of our services (LDAP, mail, NFS, etc) are distributed among different servers. However, when we do A&Ps, it's another story. I thought it'd be nice to not have dedicated computers to do scanning/etc from, but instead be able to pull a few servers from different areas and use them on-demand. Thus, the diskless project started. The overall goal was to either (for PXE) pull the boot loader off of a TFTP server, or provide the boot loader on CD/floppy for the computer to boot from. The boot loader would then grab its kernel from a TFTP server, boot it, and mount its root drives over NFS. Having the drives over NFS would also allow us to share resources - like scripts, programs and results - between all servers easily. They essentially share most binaries, and only have a specific partition that contains server-specific information.

But how? I'll describe the PXE version of what I did, keeping in mind the only *real* change you'd have to make would be to make an ISO out of the stage1/2 boot loader and burn it to a CD.

Step 1: The boot loader
I chose GNU GRUB 0.95 as a boot loader, primarily due to its seamless integration with Linux, and because it can be network aware. I obtained a few patches to add support for Tigon3 and UNDI cards, and then configured/compiled it. I then took the PXE stage 2 image and placed it into a folder on our tftp/dhcp/dns server, Nokia (which was a Check Point firewall until its hard drive crashed).


Step 2: The DHCP/BOOTP server
The next step was to make a DHCP server that could answer the DHCP requests sent out by the ethernet card. I used static maps for DHCP so I could send server-specific boot information based on the MAC address (like what NFS mount it should use, etc). I defined the following for our test machine, "lablaptop":


host lablaptop {
    hardware ethernet 00:11:25:B3:1B:F1;
    fixed-address 10.1.1.2;
    option option-150 "(nd)/menu.lst";
    filename "//pxe/undi";
}


This defines the MAC address that'll be answered (the one ending in F1), the IP it'll get, the GRUB menu list that'll be displayed post-stage2 (menu.lst), and the GRUB boot image that'll be piped down when the card requests an IP using PXE. The filename path is relative to the TFTP server root.
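For completeness, the TFTP side on Nokia was just a stock tftpd. The exact setup isn't recorded here, but a typical inetd.conf line (with /tftpboot assumed as the TFTP root) looks like:

```
# /etc/inetd.conf - serve /tftpboot over TFTP (path assumed)
tftp  dgram  udp  wait  root  /usr/sbin/in.tftpd  in.tftpd -s /tftpboot
```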


Step 3: The Kernel
The easiest part was the kernel. Our configuration was pretty simple - support for most network cards, support for kernel-based NFS, and support for network booting. We used the 2.6 kernel for this, as its network booting capabilities are (imho) more refined than in previous versions.
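For reference, the relevant 2.6 options (names from the stock kernel config; your .config may vary slightly by version) are:

```
# .config fragment for NFS-root network booting
CONFIG_IP_PNP=y          # kernel-level IP autoconfiguration
CONFIG_IP_PNP_DHCP=y     # ...obtained via DHCP
CONFIG_NFS_FS=y          # NFS client, built in (not a module)
CONFIG_ROOT_NFS=y        # allow / to live on an NFS mount
```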

Step 4: NFS
An NFS server was created to host two mounts - the shared mount, which all servers can mount, and the server mount, which is unique to each server. The layout was pretty simple:

Server: /tmp, /bin, /sbin and /lib (required programs/libs by init), some stuff in /etc, and everything in /var

Shared: Everything else
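On the NFS server, an exports file matching that layout might look like this (a sketch in Linux syntax; the paths and addresses are made up for illustration):

```
# /etc/exports - hypothetical diskless layout
/exports/shared      10.1.1.0/24(ro,async)
/exports/lablaptop   10.1.1.2(rw,no_root_squash,async)
```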

Each server was configured to mount the Server partition as the root partition by the kernel, using GRUB boot parameters. The shared drive is mounted via fstab during the init cycle.
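Tying it together, a menu.lst entry for this (the kernel path and server IP here are assumed, not from my actual config) would be along the lines of:

```
# (nd)/menu.lst - GRUB entry for lablaptop
title lablaptop
root (nd)
kernel /kernels/bzImage-2.6 root=/dev/nfs nfsroot=10.1.1.1:/exports/lablaptop ip=dhcp
```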

Step 5: Testing
To test, we booted up our laptop, and told it to boot off of the network. Our card supported the Intel UNDI specification, so we told the DHCP server to point it to the UNDI Grub image. The laptop booted, downloaded the Grub image, and showed the menu. We selected our menu entry "lablaptop", and Grub downloaded the kernel and booted it. It then mounted its NFS mount, init ran, it mounted the shared drive and we got a login. How about that?


I could go into much more detail, and I'm considering making a page with the patched Grub source we used, the menu file, the dhcp config file, etc. If there's interest, let me know!


Tuesday, July 18th

NFS SSL Follow-up

So again I've failed at keeping this thing updated... work has (gasp!) been busy, and with two weeks of vacation thrown in there, I've been a bit slow getting back to this. But revisiting my NFS/SSL experiment yesterday reminded me I should update. So here it goes.

In my place of business, things kind of happen all-of-a-sudden. So during a conversation with a manager, the subject of encryption came up. How does this play into the NFS/SSL stuff I recently blogged about? When the manager asked about my skills, I brought up this experiment. I also remembered that I wanted to do some speed tests to see *exactly* how much overhead this causes for the processor. I only did writes across our network, but the results were pretty neat. On our 100 mbit network, we average about 1.66 MB/sec transfer rates across NFS. Force it across a SSH tunnel and we drop our rates to 1.62 MB/sec, or roughly 2%. Which. Is. Awesome. I was convinced the symmetric encryption thrown on top would bloat out our packets (which were controlled at 1024K bursts), but alas, it worked out splendiferous.
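The entry doesn't record the exact test harness, but the shape of the write test was a plain dd pushing data in 1024-byte blocks. Against a local file it looks like this (in the real runs, the output path sat on the NFS mount):

```shell
# push 1 MB in 1024-byte blocks; dd reports the transfer rate on stderr
out=$(mktemp)
dd if=/dev/zero of="$out" bs=1024 count=1024
rm -f "$out"
```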

As an aside, when packets were pushed above the IP fragmentation threshold, data rates were about the same for plain and encrypted transfers.

Now, what kind of overhead does this create? On our Pentium-D client, we only see a 2% increase of combined user and kernel space resources over unencrypted NFS transfers. Due to NFS's limited resource usage, this equates to a negligible use of resources, which means scalability is huge for this implementation.

Overall, the tests were a success. We're going to be rolling this out to all of our development network servers in the coming weeks, so I'll be sure to post more about what happens.


Tuesday, May 9th

NFS over SSH Tunnels

So I just spent the good part of today trying to get NFS to run over SSH tunnels. After a long journey, I've finally got something worthy to share. So here we go. The two servers used were a Slackware Linux box (10.1) as the client and a FreeBSD box (5.0) as the server.

DISCLAIMER
Before you read, remember: this worked for me!!! It MAY NOT WORK FOR YOU! And don't blame me if you do this and something bad happens to your server. This is informational only - and I won't help you out if you have problems. It's just another resource in the sea of information available to you.
-----------------------------------------------------------------------------
Basic Daemon Config
One of my "pet projects" in the lab is to build a production-quality network using only open source products (read: Linux and BSD). One facet of this lab is an NFS drive, so servers and clients in this network can share common configuration files, user homes and other information. However, as our area of the Firm focuses on security, we're trying to go the "extra mile" to make sure our network is secure from the start... which means NFS either needs a pretty good security model around it, or it's out the door. For those unfamiliar with NFS, it stands for "Network File System", and is the "old way" of sharing files across UNIX networks (before the wonderful (?) days of SMB). Its main form of access provisioning is the exports file, which lets you limit by IP address... and as any security professional will tell you, that is not very hard to hack around.

Now you may be asking yourself - "Noah, why not just use SMB?" Simple answer? We're a mixed environment, with BSD and Linux clones making up the majority of the network. And the last thing I want is to try to manage a kernel hack (because really, for BSD that's what SMBFS is) in a production-quality environment. So we're left with trying to lock down NFS, or making some back-plane network that would be subject to attacks should a computer get compromised. That, and I don't have that many spare ethernet cards lying around. :]

So back to the NFS problem - how do we make an insecure protocol secure? Simple answer: SSH tunnels. SSH can be installed on pretty much any operating system with no trouble, and it provides a very high level of security - SSH tunnels are very hard to crack into. However, since NFS is a mixed transport-layer protocol (it uses both UDP and TCP), it's hard to tunnel: SSH port forwarding only carries TCP, so the UDP side of NFS can't ride the tunnel. So what's the fix? Forcing the NFS client to talk only over TCP, and forcing the NFS daemon to only accept requests over TCP. And since we're trying to limit who has access to the NFS daemon, we'll bind it only to localhost. On the BSD server, this method of startup for NFS looks like this on the command line:

/usr/sbin/nfsd -n 10 -h 127.0.0.1 -t



Note the "-n 10" - meaning run 10 server processes, so up to 10 requests can be handled at once. But before you do this, you'll also have to make sure mountd starts up correctly. In particular:

/usr/sbin/mountd -r -n


Which tells 'mountd' to allow mounts for regular files (required for our lab), and more importantly, to accept mount requests from users on unprivileged ports. We'll get to why that's important in a minute.

The Exports File
Now that we've got an NFS daemon running on localhost (away from nasty intruders on our LAN), we had better configure our exports file. We'll use a basic example, as our lab is kinda extreme:


#/etc/exports
/mnt/export -maproot=root -alldirs 127.0.0.1


This tells NFS to only allow access to /mnt/export and all of its subfolders (what we're sharing) to 127.0.0.1, giving root access to it. The "-maproot=root" option maps the client's root user to root on the server - effectively leaving access control up to the client. Our lab is fairly client-heavy for security when it comes to NFS, due to the tunneling restrictions we have in place.

Into the Tunnel
Now, provided all the daemons started up correctly, we're faced with an interesting problem - we cannot connect to our share in exports... we'll be denied access since we're not coming from 127.0.0.1. So how do we get to it? SSH tunnels! This was the whole point of binding to localhost, allowing only localhost, etc. Now I won't step through how the tunnels were provisioned (in terms of access), as there are many how-to guides on that. Instead, we'll assume your private key is in "/etc/tunnel.key", and your user on the other side of the tunnel is "tunnel". Got it?


The first thing we need to do is make a tunnel for our RPC traffic. We need RPC so we can figure out what port our mount daemon chose to run on. There is an alternative to this - you can run mountd with the "-p XXXX" option, where XXXX is your chosen port. Though I loathe RPC, I'm a traditionalist and tend to use it how it was made... even if it sucks. At any rate, you'll need to have RPC open anyway for the mount call to succeed. To establish the tunnel, run:

ssh -fqN4 -i /etc/tunnel.key -L 111:127.0.0.1:111 tunnel@NFSHOST



Where NFSHOST is your... you guessed it! NFS host. Once this runs, run:

rpcinfo -p 127.0.0.1



This verifies your tunnel is up: you should get an RPC listing that contains the port number for the TCP version of mountd. An alternative is to do a:


rpcinfo -p localhost | grep mountd | grep tcp | awk -F ' ' '{ print $4 }' | uniq



Which is a small awk/grep/etc thingy to pull the port number. Once you've got the port, make two more SSH tunnels, one for the mount daemon and one for NFS:


ssh -fqN4 -i /etc/tunnel.key -L MOUNTD_PORT:127.0.0.1:MOUNTD_PORT tunnel@NFSHOST
ssh -fqN4 -i /etc/tunnel.key -L 2049:127.0.0.1:2049 tunnel@NFSHOST



Where MOUNTD_PORT is the port pulled out for mountd from the RPC call. You've successfully tunneled!

Making the Mount

The last part - and the easiest - is to make the NFS mount from the client machine. To do so, simply mount like so:

mount -o rw,rsize=32768,wsize=32768,tcp,port=2049,nolock 127.0.0.1:/mnt/export /something



Where 127.0.0.1:/mnt/export is the mount point defined in exports, and /something is where it'll be attached on the local machine. This may take a few minutes - depending on your network, computer speed (you're running over encrypted channels now) and NFS client - but it should work. If not, then you're not as (un)lucky as I am, and should look for something else to help you out. :]
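If you'd rather keep the mount in fstab, the equivalent entry looks like this - note the added "noauto", since the tunnels won't exist yet at boot time:

```
# /etc/fstab - client side; same options as the mount command above
127.0.0.1:/mnt/export  /something  nfs  rw,rsize=32768,wsize=32768,tcp,port=2049,nolock,noauto  0  0
```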

A Little Discussion
So what did we actually accomplish here? We're now tricking NFS into believing our localhost has an NFS daemon on it that it can mount from. Using SSH tunnels, we're actually forwarding these connections off to our file server, and since it's SSH using public/private keys, it's all nice and secure. All of our traffic travels over secure tunnels - from basic connections to file transfers. To cap off what we did, we made sure all services were binding to localhost, and that the tunnels were not accepting connections from anyone BUT localhost. While there are still some security holes - namely, unprivileged users could use the tunnels to mount their own NFS shares - we're much closer to being "secure" than we were without the tunnels.







Sunday, April 30th

Welcome Back

Hello, noble readers.... I am back! That's right, after a long stretch of inactivity, I am back in the blogging world. It only took about 7 months (yep, my last post was in October), but I made it. After reading dusty's journal for awhile, I was inspired to rewrite this hunk of junk of a journal software, so it's much easier for me to make posts. What you'll see in the not-too-distant future is just the shell of what it will eventually be.... but for now, you'll only see posts, and perhaps a little month-by-month navigation mechanism over to the left.

So what's been new? What have I been up to? Let's see -- work, hanging out with my family, working on the basement and landscaping, and, on occasion, playing SimCity.

First, work has been going great. I've been doing a small amount of travelling to Chicago, and hopefully this week I'll hook up with Big Mike to eat at (according to Mike) the best pizza place in the center of the world. My manager PJ will be working two clients down there, so I'll be pretty much on my own for the whole week... which is scary but neat (not many first-year associates get to be on their own).

The family: Wes turned 1 at the beginning of the month. It's hard to believe how fast he's growing. He's walking around, saying "hi dad", "uh oh", "all done" and a host of other phrases we can't quite understand. He's currently taking a nap, though he's really just having some time to himself... he's weird that way... sometimes he just wants to sit in his crib alone and sing. Another American Idol? We'll see. Amy's still doing copyediting on the side, while taking care of Wes and Amanda and Dave's little boy Jake. Wes is a lucky little guy.

The house: I have a door AND a doorway. That's right, yesterday Michael and I bought a door for the laundry room at Menards and screwed it in. It's not bad at all. We've also started a landscaping project outside, which has been on hold due to the crappy weather. We've got about a fourth of it cut in and laid, but there's still plenty to go. After that's done, we'll start ripping out what's there, and replacing it with our overall "scheme". Just like the basement "scheme" (aka my eye).

Well, that's it for now. Thanks for coming back, reader... I promise there'll be more to see.


Tuesday, October 4th

The edge of forever

So I got up this morning to the sounds of my little boy singing. Of course it was 5:45am... but it was ok. If I've got to wake up, it's better to have him singing as my alarm than that annoying beeping sound my alarm makes. So as I was lying back in bed (after I got up and brought Wes to Amy), I started thinking about what I was going to do today. It's usually a ritual of mine to lie in bed half asleep, trying to figure out what that one thing was that I was supposed to remember about a given day. You know, you think to yourself before you go to bed that you need to do something, with the hope that you'll remember it in the morning, but nonetheless you inevitably leave the house with that "I know I forgot something" feeling. Except today, I had nothing to forget, because I had nothing to remember. For the first time in a very long time, I can just go to work, just come home, and just do what I want with my evening - not having to go somewhere or do something. It's really a great feeling... to know that, if I want, I can come home, eat dinner, and unpack boxes. Or I can go work in the garage. Or, I can start planning out what I want to do with our basement. Or, I can sit around on my butt and watch TV. It's a great, great feeling. :)


In other news, it took over an hour and a half to get to work. I seriously considered getting out of my truck and walking to Hopkins. I'm sure I would have beat some of the other drivers, who must think their car can only go 5 mph when it's raining. I'm all for safe driving, folks, but seriously, it's not a flood, and the coefficient of friction didn't drop THAT far with the presence of water on the road this morning. I felt like that Sammy Hagar song "I Can't Drive 55", except it was more like "I Can't Drive 5" (and subsequently, not being able to get it out of 1st gear).

Back to work...


Music: Collective Soul - Perfect Day