Archive for the 'Job' Category


NCDA Boot Camp, Day 3

Had the first of two exams today, NS153, and I am happy to say that I passed! I am now halfway to my NCDA. After the exam we began covering the material that will be on Saturday’s NS163 exam. After class I was talking with Jon and found out that he wasn’t going to have dinner tonight because he was short on cash (he forgot to process a bunch of expense reports, oops), so I told him he should come over to my hotel, which is literally next door to his, because we get free dinner in the lobby. We had some good conversation over dinner about the class, the material, and NetApp in general. Apparently this is the first time Jon has hung out with any of his students outside of class, which I thought was odd because he is by far one of the most straightforward, down-to-earth, and straight up cool techs I have ever met. Well, time to study some more!

NCDA Boot Camp, Day 2

So my first test is tomorrow, and in preparation we filled out the necessary forms for Prometric. As soon as I got my form I had to laugh: according to the paper, Prometric still thinks they are a Thomson company. I then had to explain to the class why I was laughing: 1) Thomson doesn’t exist anymore, we are now Thomson Reuters, and 2) back when we were Thomson, we sold Prometric in 2007. Jon said he was going to send an email to his contact to see about getting updated forms. 🙂 The second day of the class went smoothly, and as I predicted some of the conversations have been very interesting. Also as I predicted, Jon is really awesome; he knows the material very well and has no problem admitting when he doesn’t have the answer to a question, but he typically gets the answer while we are doing the labs, either by doing some quick research or by calling one of his contacts. I can see Jon being a great contact throughout my career.

NCDA Boot Camp, Day 1

Made it in to class this morning, bright and early at 8am…and went into the wrong building. I wasn’t the only one, though; the entire class, including the instructor, did. It had to do with the way we were given the address and how the buildings are actually laid out inside with the suites. Long complicated explanation short, I found the class. Interestingly enough, the instructor was one of the last people to arrive (on time, I want to add, as I was just really early), and he (Jon Presti) seems really damn cool; I think he is going to be a great instructor. We have decided collectively to begin class at 8:30 instead of 8:00, which for me means another half hour to study before class because, let’s face it, I will be up well in advance. Anyway, the first day was really good. There are 8 guys in the class counting me, and according to Jon this is the first time he has had a class composed entirely of guys from enterprise-level storage environments. Based on some of the conversations we had today in class, I am thinking the discussions about storage are going to be very interesting and informative. I found out today that of the two exams I need to take to get my NCDA certification, one will be on Wednesday morning and the other on Saturday morning. The one on Saturday is supposed to be the more difficult of the two, so I am certain I will be spending all of my “free time” studying. Well, one day down, time to hit the books!

Texas, I Am In You

Flew in to Texas this afternoon (plane landed about 2:30pm Central / 3:30pm EST) and, after finding my way out of the terminal, I made my way to the rental car bus, where I ended up with a Dodge Caliber with a GPS system as my rental. First impression: it is a decent car, but not something I would want to own, as I feel very cramped in it. I found my way to the hotel, TownePlace Suites, without too much of an issue. I say without much of an issue because it took me a little while to understand what the GPS (a Garmin Nuvi) was saying; every time I had to use an on or off ramp from a major highway (by the way, Dallas and Las Colinas seem to be nothing but concrete and asphalt!) it sounded like it was telling me to take the “sledge road,” which confused me, but luckily I could tell from the map what it wanted. After finding the hotel and getting checked in, I decided I had better find my way to the class location so I would at least have an idea of how to get there in the morning. Turns out it’s not very difficult to get there, but the main road is closed due to construction, of course, so I have to take a detour that puts me back out on the main road (Walnut Hill) about 15 yards from the driveway to the class location. Thankfully I have the GPS, because it was able to re-route and find a way back to the class location, and seeing as the route is pretty straightforward I won’t need the GPS in the morning. After finding the class I went to have dinner at a sushi restaurant Chuck recommended from when he was down here, The Blue Fish, and I have to say it was very good! After dinner I came back to the hotel to relax and prepare for class. I have a feeling this is going to be a long week.

Renaming Volume Groups

I’m anticipating a project coming up at work that will hopefully allow me to rearrange some file systems on my TSM server and improve performance. One of the things I want to do as part of this project is rename some volume groups on the AIX server that houses them, so they make a little more sense in the overall scheme of things. Problem is, I was not positive how this should be done, so I needed to do some research. I found the commands I will need to implement the changes, in theory:

First I need to know what volume groups are on the system, this can be found using the lsvg command:

# lsvg

Next I need to know which disks are in each volume group, this can be found using the lspv command (truncated output):

hdisk0 00cdfe5b0855e5f5 rootvg active
hdiskpower0 00cdfe5b448b684a vg04 active
hdiskpower13 00cdfe5bd1d8fdd5 raweagantsm02 active
hdiskpower92 00cdfe5b4494c652 raweagantsm01 active
hdiskpower96 00cdfe5b448dddcf raweaganarch active

Next I will need to take the volume group I want to rename offline using the varyoffvg command:

# varyoffvg vg04

Now we need to export the volume group so we can later import it with the new name. To export a volume group use the exportvg command:

# exportvg vg04

Now an lspv would show all disks previously associated with the exported volume as having no volume group:

hdisk0 00cdfe5b0855e5f5 rootvg active
hdiskpower0 00cdfe5b448b684a None active
hdiskpower13 00cdfe5bd1d8fdd5 raweagantsm02 active
hdiskpower92 00cdfe5b4494c652 raweagantsm01 active
hdiskpower96 00cdfe5b448dddcf raweaganarch active

Now I can import the old volume group with the new name using the importvg -y command (the -y <volume_group_name> flag tells the system what to name the imported volume group; if it is omitted the system will automatically generate a name). Only one disk needs to be specified — any disk that was part of the volume group will do, as the rest are discovered from the on-disk metadata:

# importvg -y raweagantsm03 hdiskpower0

Now an lsvg should show the new volume group:

# lsvg

Additionally an lspv will show the disk now being part of the new volume group:

hdisk0 00cdfe5b0855e5f5 rootvg active
hdiskpower0 00cdfe5b448b684a raweagantsm03 active
hdiskpower13 00cdfe5bd1d8fdd5 raweagantsm02 active
hdiskpower92 00cdfe5b4494c652 raweagantsm01 active
hdiskpower96 00cdfe5b448dddcf raweaganarch active

Hopefully this works the way I think it should. I have spoken with my local AIX guru and told him my plans and everything seems to check out. Once the new disk comes in for the rest of the project I’ll work on implementing the above.
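Pulling the steps above together, the whole rename could be sketched as a small script. This is a sketch only — the volume group and disk names are the ones from my environment, and the dry-run default just echoes each AIX command instead of running it, so nothing happens unless you flip DRY_RUN off on the actual host:

```shell
#!/bin/sh
# Sketch of the volume group rename procedure.
# DRY_RUN=1 (the default) only echoes the AIX commands;
# set DRY_RUN=0 on the real AIX host to actually run them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

OLD_VG=vg04            # volume group to rename
NEW_VG=raweagantsm03   # new name
DISK=hdiskpower0       # any disk belonging to the old volume group

run varyoffvg "$OLD_VG"            # take the volume group offline
run exportvg "$OLD_VG"             # remove its definition from the system
run importvg -y "$NEW_VG" "$DISK"  # re-import it under the new name
run lsvg                           # confirm the new name appears
```

As I understand it, importvg varies the group back online on its own, so no explicit varyonvg should be needed afterward — but I will be confirming that with my AIX guru before running this for real.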

Success, decrypted

The past few days have been pretty busy for me with the work disaster recovery test going on. It was nice to be back in Philadelphia, as I really enjoy this city; I am always amazed at the beauty of City Hall and the Masonic Temple. This time, however, work did not afford me the opportunity to do any sightseeing. The test went well, though; in reality it went better than anticipated. The new LTO4 tape drive encryption I implemented went very smoothly. I was able to configure the TS3500 library to communicate with our TKLM server without any issues, which was a very pleasant surprise. Once that was done and the AIX boxes were built, we brought up the TSM server, and I have to admit, once the server started reading the first encrypted tape to restore TSM’s database I was elated. Had we not been able to read the encrypted data on that tape we would have been in a world of trouble, especially since I had already moved the entire environment’s data to encrypted tapes. Once the TSM server was completely up and running, all of the necessary restores ran really smoothly, and as a bonus the new LTO4 tapes wrote the data back out faster than anticipated. In the end it was a successful test and a pretty decent validation of all my hard work to implement the new encryption method.

Off to Philadelphia

Due to some unfortunate scheduling I had to cut my Father’s Day with Kylie short (dropped her back off at Mommy’s house around noon), as I have to leave for a DR test in Philadelphia which starts tomorrow morning at 8am. My flight leaves at 1:45, which means I have to be at the airport by 1 at the latest. I tried to get a later flight out so I would have more time to spend with Kylie, but the price went up by an additional $600 for the later flight, so sadly I am not able to postpone it. The only thing that makes it anywhere remotely ok is that Kylie has been with me since Thursday night, so I have had a good amount of time with her the last few days. So off I go once again to Philadelphia; at least it is a city that I really enjoy being in.

Happy Anniversary to Me (and my job)

I just realized that today marks my 5 year anniversary with Thomson Reuters. When I first started working for the company (February 7, 2005) we were Thomson Medstat; then in 2007, when Thomson purchased Reuters, we became Thomson Reuters Medstat. After the acquisition was complete we eventually transitioned to what we are now: the Healthcare business of Thomson Reuters. Over the last 5 years my career has gone down a completely unexpected path. When I graduated from college in 2004 my hope was to become a Unix System Administrator, and while I do perform some work on Unix systems I am by no means a Unix administrator. I started out as a humble Operator in (what was then) the North American Data Center (NADC) for Thomson Healthcare and Scientific. I was working third shift (11pm to 7am), performing mainly system monitoring along with data submission (getting customer-submitted data off of various media and into our data processing and tracking software) and general data center monitoring. After a little under 2 years I learned that the NADC was moving to Eagan, MN to leverage the Thomson West Law campus and their data center. Knowing that I would soon be out of a job, when they asked for someone to go to Eagan to aid in the build-out of the Healthcare Data Center infrastructure, I knew I had to go. So, making the sacrifice to be away from my 1-year-old daughter for 3 months, I packed up and moved to Minnesota; little did I know those three months would turn into almost nine. After all those long months of racking servers, running cables, configuring iLO connections, and troubleshooting hardware issues, I came back home to what I thought was no job. However, the day I got home I received a phone call from Bob G. asking me to join his team and become a Storage Engineer, responsible for corporate backups (using IBM TSM) and eventually NetApp and SAN disk.
Over the last three years in this role I have managed to become one of the most knowledgeable and turned-to TSM admins in the company and have gained the title of Sr. Storage Engineer. Had you asked me 5 years ago where my career would take me, I can guarantee this is not the path I would have laid out; but I have embraced the opportunities that were presented to me and here I am. I wonder what the next five years will bring?

That’s Annoying

So the DR test failed… The part that bothers me the most is that we are not sure why. For some reason we were not seeing the restore speeds that we expected. Building the TSM server was easy, and is almost becoming routine. I had all the tapes in the library before the system was ready, and once the system was properly configured and handed over to me I had TSM rebuilt and up and running in under four hours. I had a few issues that I quickly troubleshot and resolved, but we still didn’t stream from the tapes as fast as we have in the past. At one point I realized the restore wasn’t streaming from multiple sessions; after I fixed that we got some increase in restore speed, but nothing like we expected. I know the nature of this data (millions of small files) tends to cause slow backups and restores, but we have seen much better speeds in the past. I know I will be pondering this for some time to come.

java netapp.cmds.jsh

One of the problems I have with the NetApp filers is the inability to use simple UNIX commands like cp (copy) and mv (move), even though the underlying OS is UNIX-like. Oftentimes I have needed these commands to make backup copies of files I need to modify, but have not been able to use them. There are a number of ways to work around this problem:

  • export and mount /vol/vol0 to a UNIX host and use native UNIX commands
  • CIFS share /vol/vol0 to a Windows host and use native Windows commands
  • create a snapshot of /vol/vol0 and use the native Data ONTAP rdfile and wrfile commands

Of course there are problems with these as well.

  • Exporting and mounting the volume requires access to a UNIX host with the appropriate permissions to perform these actions, which often enough I do not have.
  • Using a CIFS share requires the filer to have a CIFS license in place, plus access to a Windows system on the right network that can reach the filer; at least one of my filers does not have a CIFS license, and on the ones that do I may not have access to a system that can map the share.
  • The Data ONTAP rdfile and wrfile commands are useful, but dangerous. wrfile, which allows you to write to a file, first destroys the file (removes all data from it) and then opens it for you to write to. This means if you forget to use rdfile first to see the contents of the file, you have just erased it. Correcting this can be as simple as pulling the original file from a snapshot (if you remembered to take one) or as complicated as needing to recreate the file from scratch.
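Given that last pitfall, the habit worth forming is to always snapshot and rdfile before any wrfile. A hypothetical console session (the file path is just an example):

```
filer01> snap create vol0 pre_edit     (safety net first)
filer01> rdfile /etc/snmpd.conf        (capture the current contents)
...existing contents shown here...
filer01> wrfile /etc/snmpd.conf        (truncates the file, then takes new contents)
```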

The solution to all of these problems is an undocumented Java shell on the filer, which grants the ability to use cp, mv, and other commands.

Here is an example of the command to drop to the java shell and a list of the commands available:

filer01> java netapp.cmds.jsh
jsh> ?
Java Shell commands:
cd [directory]
ls [-l]
cat file
rm file [file2 …]
cp src dest
mv src dest
ps [-l]
kill <-1|-9> threadName
classpath [pathname]
syspath [pathname]
Debug on|off
du [-sk] [files or directories]
java_class [&]
jsh> exit

This command alone has made the class worth it for me; it will come in very handy in the future.
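As a concrete example, making a backup copy of a config file before editing it — the exact use case that sent me looking for this — might look something like the following (file names are just for illustration):

```
filer01> java netapp.cmds.jsh
jsh> cd /etc
jsh> cp rc rc.bak
jsh> ls -l
jsh> exit
```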