Dual Booting Ubuntu/Windows on the Acer S7

I got my new SSD in the mail a few days ago and I’m only now settling into a stable system after clean installing Ubuntu 14.04 and Windows 8.1 on it. The process was relatively painless, but I didn’t have a nice long block of time to power through it. On the upside, I’m pleased to report that this setup runs pretty nicely on my S7-392. No huge issues, other than the fact that the data on my old SSD is now inaccessible short of reconfiguring my laptop to boot from it, but more on that later.

To start off, I tried to keep my old Windows installation by cloning the disk image to the new drive using my external enclosure and a few pieces of software. These efforts were in vain due to two limiting factors. The first, which caused me a lot of headaches, was that Acer used a very unusual SSD configuration that almost certainly isn’t shared by many other vendors. I mentioned earlier that the Kingston SMSR150S3/128GB is configured as a 2x64GB RAID0, but at the time I didn’t think that was anything special. It turns out this isn’t a software RAID0 like a lot of the single-drive setups you read about. It’s actually a hardware RAID0 that implements two independent SSDs on the same board and uses a special connector to communicate with both drives over a single slot. I looked into the pin-out for the mSATA connector, and many of the pins are labeled “Reserved” or “Vendor Specific”, so it isn’t outside the realm of possibility for Acer/Kingston to implement this hardware RAID0 on a slot. The trade-off is that they were able to double(!!) the throughput of the SSD in exchange for less data reliability.

Where this complicated things for me was that all the free software I tried that ran as a boot disc failed to recognize the hardware RAID0. The second factor was software running in Windows, which gave me trouble too; admittedly, I didn’t try too hard on that front because most of it was gross freeware. I then decided to go the “nuclear option” and clean install everything, with no backups other than what’s stored in the cloud, copying files from the old drive later as necessary. This would bite me in the ass when I discovered that (obviously) my external enclosure is useless for this special drive: the data is split between the two drives, and the enclosure only has access to one 64GB half, which looks like garbage on its own.
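To make the enclosure problem concrete, here’s a toy model of two-way striping. The stripe size and layout below are illustrative assumptions on my part, not the Kingston controller’s actual parameters (real stripes are on the order of tens of kilobytes):

```python
# Toy model of two-way RAID0 striping: logical data is split into
# fixed-size stripes written round-robin across two member drives.
STRIPE = 4  # bytes; deliberately tiny so the interleaving is visible

def stripe_write(data: bytes, stripe: int = STRIPE):
    """Distribute data across two member drives, one stripe at a time."""
    drives = [bytearray(), bytearray()]
    for i in range(0, len(data), stripe):
        drives[(i // stripe) % 2] += data[i:i + stripe]
    return drives

def stripe_read(drives, stripe: int = STRIPE) -> bytes:
    """Reassemble the logical volume; requires BOTH members."""
    out = bytearray()
    i = 0
    while True:
        off = (i // 2) * stripe
        chunk = drives[i % 2][off:off + stripe]
        if not chunk:
            break
        out += chunk
        i += 1
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog."
d0, d1 = stripe_write(data)
print(stripe_read([d0, d1]) == data)  # True: needs both members
print(bytes(d0) == data[:len(d0)])    # False: one member alone is fragments
```

Either half alone is just interleaved fragments, which is exactly why a single-drive enclosure sees garbage.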
I may just have to sell this drive as a replacement part for someone else, because in an enclosure it can only ever function as a 64GB external SSD. In hindsight, the right course of action may have been to image the drive onto an external disk and restore that image to the new drive, but I didn’t have a working external and didn’t want to buy one and wait for it.

Now, because I have an ultrabook, optical media is gone, so I had to prepare all my boot media on USB drives. Microsoft solved this pretty elegantly with its media creation tool, and for Linux distributions there’s the Universal USB Installer. This wasn’t a problem because I’ve amassed a large number of USB flash drives from my Cyber Security project. Running the installers for both OSes was a very smooth, pleasing process thanks to decades of refinement in deploying these complex systems. I should note that to recover my product key I had to use the Magical Jelly Bean utility, and I had to call the Microsoft activation line to reactivate it. This may be the first time I’ve done a clean install of two operating systems without any major hardware issues. Ironically (or perhaps not), I had to fuss with drivers more on the Windows side than the Linux side. For partitions, I settled on a 140/60/60GB split between Windows, Ubuntu, and storage respectively. My reasoning was that Windows was tight on space before the upgrade, so I gave it extra room to grow; Ubuntu is pretty lean, so 60GB should be fine; and I don’t store much media locally, so 60GB is fine for that too. We’ll see how this plays out long term; I think my next evaluation will happen around the release of Windows 10.

The real issue was getting to know the UEFI framework and getting it to do what I wanted. UEFI is basically a modern upgrade of the BIOS system, and as a developer I understand the need to break with the old. It has its own partition and a small OS of sorts that controls the boot process from the hardware and the transfer of control to an OS loader. One non-obvious (to me) step is that to change important UEFI settings you first need to set a supervisor password. At that point I could change the boot priority of USB devices, disable secure boot(!), switch from RAID0 to AHCI, and manipulate various other flags. Secure boot is a way for UEFI to authenticate the operating system, and it seems to open the door for vendors to lock computers to the operating system installed at the factory. Unless there’s an easy way for me to replace the authorized keys and sign my own operating systems, I don’t see any other valid use case. What’s more troubling is that UEFI and Windows have a strong tendency to prefer Windows, and certain steps tended to obliterate my ability to boot into Ubuntu. This happened after I had everything working and decided to update my UEFI firmware: it completely nuked my GRUB bootloader, and to get it back I had to experiment with various boot discs until I tried the boot-repair disc (boot-repair is not available on the 14.04 live disc). That fixed the UEFI boot entries, but I still had to manipulate the boot settings from the Windows side to get everything booting correctly. After all this, I was finally up and running.

Update (7/30/15):

Installing Windows 10 over this setup wiped out GRUB (no surprise). All I had to do to get it back was grab the boot-repair-disk again, hit Fn+F2 at the Acer logo to get to the firmware menu, disable secure boot, and change the boot order to boot from USB first. Then run the recommended repair within boot-repair and restart. Finally, inside an admin command prompt under Windows, run “bcdedit /set {bootmgr} path \EFI\ubuntu\shimx64.efi”. After this step the computer rebooted into GRUB with no problems 🙂

Shopping for a New SSD

As I’ve blogged before, my primary machine is an ultrabook that has served me really well over the past year. The only problem is that it shipped with just a 128GB SSD. For the most part this hasn’t bothered me, and if I wanted to I could probably continue to get by, thanks to the magic of cloud computing and the ubiquity of cheap data storage, processing power, and content delivery networks. However, I still consider myself a power user, and I’ve decided that the cost of doubling my storage space is more than offset by the peace of mind that comes with it. This also gives me the opportunity to finally have a native Linux installation instead of relying on VMs and the cloud to get my fix.

Anyway, the thing I love most about shopping for computer hardware is that there isn’t as much marketing fluff to decipher compared to other products. The trade-off is that you have to know what you’re looking for; on the upside, I relish learning about these intricacies. To start off, I took apart my computer to remind myself what this SSD looked like and which connector it used. It was a Kingston 2x64GB SSD, 50mm wide, connected to an mSATA (mini-SATA) port. CrystalDiskInfo reveals more about the SSD’s features, such as APM, NCQ, TRIM, and DevSleep, from its SMART data. Some Googling reveals that it’s configured as RAID0 and that the port supports SATA-III. RAID0 provides a performance boost by striping data over two separate drives, but offers no data redundancy. It doesn’t look like reproducing that setup on a new SSD for this machine is simple, and it’s probably not worth the performance gain. Since the port supports SATA-III, the peak transfer rate is 6Gb/s on the wire, or roughly 600MB/s of usable bandwidth.
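As a quick sanity check on those bandwidth numbers (idealized, ignoring all protocol overhead beyond line coding):

```python
# SATA-III signals at 6 Gb/s on the wire, but 8b/10b line coding spends
# 10 wire bits per data byte, so usable bandwidth tops out near 600 MB/s.
line_rate_gbps = 6
usable_mb_per_s = line_rate_gbps * 1000 // 10  # 10 wire bits per byte
print(usable_mb_per_s)  # 600

# In the best case, a two-way RAID0 stripe reads both members in
# parallel. If each member could saturate its own link, sequential
# throughput doubles (real-world gains are smaller, and the host link
# can still be the bottleneck).
ideal_raid0_mb_per_s = 2 * usable_mb_per_s
print(ideal_raid0_mb_per_s)  # 1200
```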

I then turned to differentiating factors between drives, and it turns out there aren’t many beyond price, size, reliability, and support. This article is a pretty good overview of what to look for in an SSD. From what I’ve read, speed is only a very small factor because the differences are often imperceptible in user applications. Price is a function of size and reliability, and I’ve budgeted under $200 for this project. I’m really only looking at drives in the 256GB range, which will comfortably host two operating systems, supporting applications, and whatever files/media I may want to store locally. That leaves reliability and support, which are primarily functions of the vendor. Reliability is worth noting with regard to SLC, MLC, and TLC, which boil down to how many bits are stored per NAND cell (1, 2, and 3 respectively). Fewer bits per cell raises the cost per gigabyte but increases the number of write cycles the cells survive before failure. MLC should be fine for my purposes, because I don’t read/write huge chunks of data frequently in my work. TLC is probably fine too, given sophisticated wear-leveling algorithms and spare cells available to the controller, but TLC drives don’t seem to be widely available from reputable vendors yet. That leaves vendor support, which is basically the sum of general reputability and warranty terms, covered pretty nicely by this post.
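The bits-per-cell trade-off is easy to see in numbers: each extra bit doubles the charge levels the controller has to distinguish. (This is just the intuition behind the endurance difference, not vendor endurance specs.)

```python
# Each extra bit per cell doubles the number of distinct charge levels
# (2**bits) that must be sensed reliably, which is what erodes write
# endurance as you move from SLC to MLC to TLC.
nand_types = {"SLC": 1, "MLC": 2, "TLC": 3}
levels = {name: 2 ** bits for name, bits in nand_types.items()}
for name, n in levels.items():
    print(f"{name}: {nand_types[name]} bit(s)/cell -> {n} charge levels")
```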

I hate to reduce my decision to such simple terms as price and vendor, but the reality is these products are all nearly identical. My choice ended up being the Crucial M550, listed on Amazon for $105 ($0.41/GB). It should be arriving soon, and I hope to write a post on getting dual-boot working on this machine.
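For the record, the $/GB math behind that figure:

```python
# Quoted price per gigabyte, recomputed from the listing above.
price_usd, capacity_gb = 105, 256
per_gb = price_usd / capacity_gb
print(f"${per_gb:.2f}/GB")  # $0.41/GB
```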

Dedicating Time for Your Hobbies

It’s a bit disappointing that my last post was over 5 months ago, on a project I have yet to complete. Disappointing not in the sense that I accomplished nothing in those 5 months; the reality is the opposite. In that span I’ve solved countless problems using all sorts of new tricks and consumed an abominable amount of esoteric technical literature. The problem I’m tackling today, however, is the cop-out of being “too busy to blog about it”. We really do live in an age where everyone always seems “too busy” for anything, unable to do this or that. Yet here we are with the highest standards of living and the greatest amount of “leisure time” in the history of man. It’s a personal paradox that I intend to debug and resolve.

The first question to answer is “Why am I busy?”. That’s an easy one. I work a full-time engineering job and pursue a master’s in computer engineering on the side. On top of that, I must be an adult by supporting my biological needs of food and shelter. Further, I need to keep up social relationships and entertain a significant other. That seems like a lot of obligations, so it’s a wonder I get anything done. However, let’s run the numbers just to be sure. There are 16 waking hours in each day. Over 7 days, that’s 112 hours to live each week, down from an initial total of 168. Let’s say 40 of those hours are spent in the office and another 5 commuting; that leaves 67 hours. For graduate school I take one class with 4 hours of lecture and 8 hours of assignments; that leaves 55 hours. It’s safe to say that my 1.5-hour morning routine of hygiene, breakfast, and exercise is dedicated entirely to biology each day. Tack on another 10 hours a week for food-related activities, since I cook most of my meals, and that leaves about 35 hours after biology. Now let’s apply an engineering assumption and say I lose 25% (9 hours) of that time to inefficiencies: travel time, sloth due to ego exhaustion, small bits of dead time, environmental factors, adult responsibilities, or any number of other reasons. That leaves me with 26 hours, or about 15% of the week, to schedule for my own personal use or activities with other people.
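The budget above can be re-run in a few lines (assuming a standard 40-hour office week plus the 5-hour commute; the 25% inefficiency factor is, as stated, just an engineering guess):

```python
# Weekly time budget, in hours.
waking = 7 * 16                  # 112 waking hours per week
work = 40 + 5                    # office + commute
school = 4 + 8                   # lecture + assignments
biology = 1.5 * 7 + 10           # morning routine + food-related time
budgeted_free = waking - work - school - biology  # ~34.5, "about 35"
free = budgeted_free * 0.75      # knock off 25% for inefficiencies
print(round(free), f"{free / (7 * 24):.0%}")  # prints: 26 15%
```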

26 hours doesn’t sound like a lot of time, but it’s more than a whole day of useful time every week to do anything my heart desires. The harder question to ask, then, is “Why do I think I’m busy?” (when I’m really not). To put it another way, why can’t I account for these 26 extra hours when I supposedly can’t even find the time to run these calculations? Thinking about it now, the answer to the latter is that I can only account for budgeted hours. Those tasks have a number attached, or one can easily be teased out; all my budgeted time falls into the categories of the “mandatory” or the “mundane”. Of course, plans are made for the other 26 hours here and there, but they’re never set in stone from week to week, and they almost certainly won’t fill up the full 26. What happens is that when I find myself without a plan for those hours, they’re consumed by whatever activity my mind automatically drifts to, which is almost never an ideal use of time.

What is the solution, then? I now know that I have 26 free hours each week that I can use to enrich my life. Ironically, the only way I can see to ensure those 26 hours are used effectively is to make that number as close to 0 as possible. What I mean is: devote time to my hobbies or aspirations each week, add it to the budget, and make it mandatory. A recent example was how I managed to add regular exercise to my life simply by waking up early and going to the gym before work. Only by the mindful act of setting aside time that must be spent on a specific task can I make regular progress toward my goals and hobbies. Otherwise, I’m doomed to the lack of ambition that comes from running on autopilot. Anyway, this has been an eye-opening introspection into how I spend my time, and I very much look forward to writing future posts and making progress on my projects.