Repurposing old dumb PCs as thin clients

If you work at a metal refinery, or anywhere that utilises H2S (Hydrogen Sulphide), you might be familiar with this sight.

These computers are old, and dead, owing to their HDDs being corroded by the rotten-egg-smelling gas that floats around the refinery they’re located at.

Our goal at this site is to move everyone around the refinery onto Teradici zero clients, and utilize our VMware View instance. Desktops (including apps and data) will be safe and sound in the datacenter, and the users won’t be subjected to losing their desktops and waiting for a replacement.

We purchased 100 Teradici-based zero clients, and we were left with a lot of old PCs without a working HDD. What were we to do with them? Sell them on eBay? Donate them somewhere?

The kill-two-birds-with-one-stone solution was Stratodesk’s “NoTouch” suite, something that caught my eye at VMworld last month. NoTouch OS is a relatively small Linux distro that provides clients for most of the major hosted-desktop systems (VMware View, Citrix, Microsoft RDSH, and so on) and gives you the ability to repurpose dated PCs as thin clients.

NoTouch OS is managed through their NoTouch Center software, which they supply as a standalone install, or a virtual appliance. Through it you can group your endpoints and apply settings at any level. You could have a group of endpoints that connect to your VDI system for your employees, and a group of endpoints that provide nothing but a web browser pointing to your timesheet system for contractors to use. It’s very flexible and packed with functionality.

The Stratodesk Virtual Appliance also gives you a PXE boot server, and this is what we’re using for our PCs sans HDDs. Just import the NoTouch OS image through the virtual appliance’s web interface, configure some boot options, pop some options into your DHCP servers, and away you go.
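For the DHCP side, the gist is pointing PXE clients at the appliance. As a sketch, on an ISC dhcpd server it would look something like this (the subnet, appliance IP, and boot filename below are placeholders; check the boot options shown in the NoTouch virtual appliance’s web interface for the real values):

```conf
# Hypothetical dhcpd.conf fragment for PXE-booting PCs into NoTouch OS
subnet 10.0.50.0 netmask 255.255.255.0 {
  range 10.0.50.100 10.0.50.200;
  next-server 10.0.50.5;     # TFTP server: the Stratodesk virtual appliance
  filename "pxelinux.0";     # boot file served by the appliance
}
```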

We’ve got 25 of these endpoints running throughout the plant, and user feedback is good. We had a couple of issues with managing multimonitor modes and auto-assigning endpoints to groups, and their support was both quick and extremely helpful. I had a response within the first couple of hours, and they had solved the issues within 6 hours. On a Saturday. At night. And before we even purchased a license!

The PCoIP client built into NoTouch OS is the official Linux VMware Horizon View client, and NoTouch lets you configure it however you want. Note that with Horizon View 5.3 you’ll be able to do RTAV through the Linux client, allowing you to use USB webcams through a NoTouch endpoint. This gives it an advantage over the Teradici zero clients, as they’ve yet to support RTAV (though sources tell me they’re working on it).

They offer a free trial that allows you to manage 2 endpoints; I highly recommend giving it a try before you go and purchase any more zero clients. Licensing works out to around $46 per endpoint (retail), including a year of maintenance, and you can purchase licenses either through a vendor or direct.

Here’s a quick video demonstrating PXE booting a PC with no HDD into the NoTouch OS.

I’m finding it hard to justify purchasing more Zero Clients. Feel free to comment below with arguments for/against PC repurposing software like this! I’d love to hear your opinion.

Migrating a View VM between hosts fails at 63%

I had a strange issue come up when trying to vMotion some VMs in our View cluster.

When attempting a vMotion of our Windows 7 VMs, the vMotion would stop at around 63% and spit out the error “Source detected that destination failed to resume”.

In the target VM’s vmware.log file I saw the following:

2013-05-16T03:58:16.591Z| vmx| MsgQuestion: msg.svga.checkpoint.gpufeaturecheck.fail3 reply=0
2013-05-16T03:58:16.591Z| vmx| Progress 101% (none)
2013-05-16T03:58:16.591Z| vmx| MigrateSetStateFinished: type=2 new state=11
2013-05-16T03:58:16.591Z| vmx| MigrateSetState: Transitioning from state 10 to 11.
2013-05-16T03:58:16.591Z| vmx| Migrate_SetFailure: Failed to resume on destination.

In this case, the problem occurred because 3D support had been enabled directly on the VM through vSphere, rather than through the pool options on the View Connection Server. Note that while the VM is powered on, the VM’s settings will not show that 3D is enabled – you can only confirm it by inspecting the VMX, or by viewing the VM’s settings while it is powered off.

I solved this by changing the pool options to enable 3D and then waiting for View Composer to update the VMs; I didn’t even have to power them down. After View Composer does its thing, the VMs vMotion without a hitch.
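To illustrate what to look for in the VMX, here’s a sketch. The file below is a hypothetical excerpt written to a temp file for demonstration; on a real host you’d grep the VM’s actual .vmx on the datastore while the VM is powered off:

```shell
# Hypothetical VMX excerpt for illustration; on a real host, check
# /vmfs/volumes/<datastore>/<vm>/<vm>.vmx instead of this temp file.
printf 'mks.enable3d = "TRUE"\nsvga.vramSize = "268435456"\n' > /tmp/example.vmx

# A line like mks.enable3d = "TRUE" means 3D was set on the VM itself.
grep -i 'enable3d' /tmp/example.vmx
```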

Environment:

  • ESXi 5.0
  • View Agent 5.1
  • View Connection Server 5.2
  • VM Hardware Version 8
  • Windows 7 guest OS

Hope this helps!

Ruby 1.9, Rails 2.3, and MySQL on Ubuntu 8.10 (Intrepid Ibex)

I haven’t the time to write out a big fleshy post at the moment, so I’ll post the short story.

I was trying to set my Ubuntu 8.10 VPS up with a fresh Ruby/Rails install to host some webapps I’m working on. Rather than apt-getting my way to glory, I decided to build Ruby 1.9.1 from source, the main reason being that the version available through apt is only 1.9.0 and I’m working with 1.9.1 on my dev machine.

After building/installing Ruby, I started installing rails, rake, rack, etc. via RubyGems. I knew that I couldn’t install the standard ‘mysql’ gem, as it hasn’t yet been updated for Ruby 1.9, so I added http://gems.github.com/ to my gem sources. If you don’t know how to do this, the command is:

gem sources -a http://gems.github.com

…and proceeded to install kwatch’s mysql-ruby gem.

To my surprise (as it had worked under OSX on my dev box), I got the following message:

plasma@syn-app01:~$ sudo gem install kwatch-mysql-ruby
Building native extensions. This could take a while...
ERROR: Error installing kwatch-mysql-ruby:
ERROR: Failed to build gem native extension.

/usr/local/bin/ruby extconf.rb
Trying to detect MySQL configuration with mysql_config command...
Succeeded to detect MySQL configuration with mysql_config command.
checking for mysql_ssl_set()... yes
checking for rb_str_set_len()... yes
checking for rb_thread_start_timer()... no
checking for mysql.h... yes
creating Makefile

make
gcc -I. -I/usr/local/include/ruby-1.9.1/i686-linux -I/usr/local/include/ruby-1.9.1/ruby/backward -I/usr/local/include/ruby-1.9.1 -I. -DHAVE_MYSQL_SSL_SET -DHAVE_RB_STR_SET_LEN -DHAVE_MYSQL_H -D_FILE_OFFSET_BITS=64 -I/usr/include/mysql -DBIG_JOINS=1 -fPIC -fPIC -O2 -g -Wall -Wno-parentheses -fPIC -o mysql.o -c mysql.c
In file included from /usr/include/stdlib.h:320,
from /usr/local/include/ruby-1.9.1/ruby/ruby.h:50,
from /usr/local/include/ruby-1.9.1/ruby.h:32,
from mysql.c:6:
/usr/include/sys/types.h:151: error: duplicate 'unsigned'
make: *** [mysql.o] Error 1

Gem files will remain installed in /usr/local/lib/ruby/gems/1.9.1/gems/kwatch-mysql-ruby-2.8.1 for inspection.
Results logged to /usr/local/lib/ruby/gems/1.9.1/gems/kwatch-mysql-ruby-2.8.1/ext/gem_make.out

Instructions on fixing the issue and installing the gem after the jump.


Torrent-X on Windows XBMC (Atlantis)

Torrent-X is a plugin for XBMC that allows you to scan through well-known RSS torrent feeds from sites like Mininova, EZTV, etc., and pipes the torrents you choose through to your torrent client of choice via (usually) its web interface. Problem is, it doesn’t work OOTB with the Windows version of XBMC. Here’s how to fix it.

  1. Edit the shortcut to XBMC and remove the ‘-p’ from the command. This will make XBMC save user data into the installation folder, and make it accessible to scripts via the internal Q: drive. This solves the issues the script has with opening the guisettings.xml file.
  2. Edit the default.py file for Torrent-X and change line 1127 as follows.
    Change it from:
    MyDisplay = GUI( "skin.xml", ResPath , "Default" )
    To:
    MyDisplay = GUI( "skin.xml", os.getcwd() , "Default" )

You should then be able to open the script from within XBMC.
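For reference, the step-1 shortcut change looks something like this (assuming a default install path; yours may differ):

```
Before:  "C:\Program Files\XBMC\XBMC.exe" -p
After:   "C:\Program Files\XBMC\XBMC.exe"
```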

- NM

PS: I’ll be posting soon on the subject of my recent trip to Singapore, specifically, what a geek can do in Singapore. I’ll also be posting on how to make a cheap, feature-packed media box.

Mailarchiver 6, the upgrade, and the broken next button…

First thing I see in the morning is an email from GFI saying “YOU HAVE A FREE UPGRADE ZOMG!”. Mailarchiver 6 is out, and it boasts quite a few new features, none of which are documented yet – they still only have the MA5 manual up on their site.

I download the update, schedule an outage (it’s not really used by anyone apart from us, the IT department), and start the upgrade. I get to the ‘You need to update your auditing database by doing this chant/sacrifice:’ part, and the next button proceeds to do sweet f–k all. The installer doesn’t lock up, I can still click back, just not next. How’s that for an error message?

The error, in fact, is that MA5 needs its latest updates installed before you can upgrade to MA6.

If you’ve come across this issue, here’s the link to the file you’ll need to install in order to progress any further: http://kbase.gfi.com/showarticle.asp?id=KBID003336

Now all I need is some documentation on the new features. It appears that an Outlook plugin they’ve made (which I can’t find) allows users to mount their archive in Outlook. If it also allows them to move things to the archive – say, old emails from their 20GB+ worth of PSTs – then this will save admins a lot of time. I’ll find out when I get time.

- NM

Fink Troubles – Cannot perform symlink test

So, I’m trying to start developing synflare.com, using my MacBook and OSX as a development platform.

I need GD for PHP, and I can’t be screwed compiling it from source (that’s why I’ve also moved from Gentoo to Ubuntu – binaries make life easier). Fink is a brilliant project that provides an almost apt-like way of getting and installing software. With Fink, I can issue a single command in terminal and have GD install itself.

My problem, however, was that Fink wasn’t installing. At the install volume selection screen, I wasn’t able to select my root volume. Its reasoning was: “You cannot install Fink on this volume. Cannot perform symlink test on this volume because of a permissions problem. Try performing the “Repair Disk Permissions” function in Disk Utility”.

After several Verify and Repair Permissions runs, I still had no joy. I noticed that the Disk Utility log was spouting the line “ACL present but not expected for…”. Some investigation showed those lines are merely informational. As Fink still wasn’t installing, I decided to fix them by running ‘chmod -a# 0’ on the directories affected. This still didn’t help! I was at breaking point.

I decided to fix it my way (read: quick and simple).

Entering the Fink installer package (right-click, Show Package Contents in Finder), I could see three scripts in the Resources folder, one of which was VolumeCheck, which basically tells the Installer whether you have permission to install to the volume in question. Editing this script, I made sure it did nothing but return an exit code of 0 back to the installer.
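A minimal sketch of that edited script (this is my workaround, not Fink’s original check; it bypasses the symlink test entirely, so use at your own risk):

```shell
#!/bin/sh
# Replacement VolumeCheck: report every volume as installable.
# The Installer treats exit status 0 as "this volume is OK".
exit 0
```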

Hey presto, it works, and there are no noticeable issues with Fink. GD installs perfectly.

In a nutshell:

  1. Copy the Fink Installer Package out of the DMG and into your home folder.
  2. Go into the folder where you copied the package, right-click on the package and click ‘Show Package Contents’.
  3. Navigate to the Resources directory, which resides inside the Contents directory.
  4. Delete the existing VolumeCheck script.
  5. Download this file – volumecheck – and extract it into the Resources directory in the package.
  6. Run the installer!

If you have this problem, and this fix works for you, be sure to post a comment – I’m interested to see how many people this happens to.

- NM

CTCP request handling in Colloquy

Back in the day, when mIRC and NoNameScript were my friends, I used to have a CTCP trigger set up that gave people DCC leeching from me the ability to resume transfers if they disconnected for some reason, amongst other things.

Now that I’m all Mac, I’ve been using Colloquy, and haven’t started looking at scripting for it – that is, until today.

Austnet are now blocking all DCC by default, and the only way to allow someone to send you a file is to issue the user command /dccallow +User <timeout> – adding them to a temporary Allow list. This, of course, only works if your nick is registered and you have identified with /nickop.

In order to save time and allow for receiving DCC file sends while I’m AFK, I did some research and found that Colloquy Plugins are the method of choice for handling CTCP requests. Plugins can be created in a variety of programming languages, like Objective-C, AppleScript, F-Script, JavaScript, Python and Ruby. I decided to do mine in AppleScript, mainly because there’s a lot more support for AppleScript plugins than for the others.

Place this script in your ~/Library/Application Support/Colloquy/Plugins directory, then issue a /reload plugins in Colloquy if it’s already open. People can then use /ctcp <Your Username> DCCALLOW to allow themselves DCC access to your username for 300 seconds. Additionally, if you want to auto-accept DCC requests from strangers, you’ll need to modify your Colloquy settings to allow it.

- NM

Version History from Sharepoint into your documents

So, we had a QA audit come up at work.

One thing that QA loves is the presence of version history inside the controlled document itself. There’s good reason for this, but apart from becoming tedious on regularly edited documents, it’s also quite unreliable, as some people may simply forgo updating it.

If you’re going to store your documents in a SharePoint document library with version recording, why not just put that info inside your document? Well, there are probably a few ways to do this, but I opted to use document properties to get the info into the documents.

http://www.codeplex.com/SPDocVersionExport

Edit: Well, I decided that using document properties isn’t the best way of doing this; it’s more trouble than it’s worth. I am in the process of building a Word add-in that will let you drop a document’s version history into the document and update it whenever the document is updated.

- NM

Adding MSCRM functionality into MOSS 2007

Just stumbled across an article by SharePoint MVP Rehman Gul which gives a short guide on adding MSCRM functionality to your MOSS 2007 site.

http://rehmangul.wordpress.com/2007/05/08/ms-crm-and-sharepoint-2007-integration/

This guide is useful for giving you an idea on how the Business Data Catalog works.

On a side note, I don’t know how many of you used MSCRM before v3, but if you did you’d have one hell of a wicked bad taste in your mouth. You probably aren’t using it anymore, or won’t use it again. But, before you give up on it, give v3 a shot. It’s prettier, less painful to manage, and it doesn’t plant itself in your Active Directory anymore. And you get awesome cross-app functionality.

- NM