Reworking my website, CSS/PHP and all that ish

I finally got around to actually making my website look somewhat nicer instead of just picking an admittedly bad theme and sticking with it. Part of the reason it took me so long was that my front-end web development skills were a little rusty. I could barely remember all the necessary tags to put together a working HTML file, let alone dynamically generate web pages in PHP. Now that the FE is out of the way, my GRE is a ways off, and I’m not back in class yet, I have quite a bit of free time after work to pursue this project. My goal was to familiarize myself with some front-end web development and build my own WordPress theme to add some more personalized flair to my website.

I didn’t know it at first, but WordPress was a great medium for practicing these skills, given that it’s both the largest open-source PHP project and built with easy customization in mind. I had never delved much into the PHP plumbing that makes WordPress what it is, let alone read the documentation or even explored all the settings pages. What I quickly learned was the brilliant way each WordPress page is generated from separate PHP files. I’m not a web developer, so this is really my first look at the architecture of a modern website, and I’m actually pretty amazed by it and by how similar it is to application programming. A WordPress page is essentially generated by a script that calls a function to generate the header portion, then, in a loop querying the content database, calls another routine to generate the posts, and finally calls one more function to generate a footer. There’s more to it than that, but that’s the basic idea.
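That flow can be sketched as a bare-bones template file. This is an illustrative sketch using standard WordPress template functions, not the actual code of any particular theme:

```php
<?php
// A minimal, hypothetical index.php sketching how WordPress assembles a page.
get_header();                         // include header.php

if ( have_posts() ) {                 // "The Loop": walk the posts the query returned
	while ( have_posts() ) {
		the_post();                   // load the next post's data
		the_title( '<h2>', '</h2>' ); // print the title wrapped in heading tags
		the_content();                // print the post body
	}
}

get_sidebar();                        // include sidebar.php
get_footer();                         // include footer.php
```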

What really blew my mind when I started thinking about it is that the modern web page is built on a number of different programming languages and tools that have coevolved and work together seamlessly. It’s not something you run into very often in my line of work; I would never inject Python into C code, nor would I use a separate language to define my variables, but stuff like this happens all over the world wide web. Having PHP scripts generate an HTML file to be parsed by a browser, styled by a CSS file that defines the attributes of each tag, kind of astounds me.

As for my theme, at the time of this writing you’ll notice that the page still very much resembles the official WordPress Twenty Thirteen theme. This is because I decided to take the child theme approach, which takes many pages from the book of object-oriented programming. I was able to inherit the stable code base and look of the parent theme while overriding only the pieces I wanted to change, much like polymorphism. When creating a child theme you can either change the appearance through the CSS file or implement new functionality by overriding the PHP files.
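Declaring a child theme takes surprisingly little. At the time, the documented approach was a style.css whose header comment names the parent theme’s directory and which pulls in the parent’s stylesheet; the theme name here is just a placeholder:

```css
/*
 Theme Name: My Twenty Thirteen Child   (any name you like)
 Template:   twentythirteen             (directory name of the parent theme)
*/

/* Inherit all of the parent theme's styling... */
@import url("../twentythirteen/style.css");

/* ...then override just the rules you want to change below this line. */
```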

CSS, I’ve learned, is a very neat way of maintaining the look of a website. It’s basically a way to make site-wide changes to specific tags in an HTML file: fonts, colors, boundaries, alignments, and a whole bunch of other neat tricks. The cascading part comes into play when you start having nested tags. When you write a CSS file, you want to define a set of default values for all the major HTML tags to act as a fallback. From there you can subdivide the different tags on your website and redefine their attributes to meet your needs. I’ve mostly been playing around with margins, padding, and fonts; I want to get the basic shape of the site down before I start tweaking other aesthetics like colors and graphics.
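Here is a small example of that fallback-then-override pattern; the class name is illustrative, not necessarily one the theme actually uses:

```css
/* Site-wide fallback: applies to everything unless overridden below */
body {
	font-family: Georgia, serif;
	color: #333333;
	margin: 0;
}

/* A more specific selector wins for paragraphs inside post content */
.entry-content p {
	margin: 0 0 1em 0;
	padding: 0 10px;
}

/* Nested further still: links inside post content get their own color */
.entry-content a {
	color: #8b0000;
}
```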

The other trick is modifying the PHP. This is a little more advanced and I haven’t played around with it too much. I made one small change to the footer of this site to display my name, but that’s it for now. The trouble with modifying PHP is that you have to really know which PHP file does what and have a goal in mind for what you want to accomplish. Without a goal there’s really no point in pursuing a PHP modification, because most if not all of the tweaks you’d want to make can be done from the WordPress dashboard. The funny thing is, I never took the time to find out how easy it was to just use the dashboard to change and add things to the page. Most people could probably get away with never mucking around in the code, honestly.
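The footer tweak works by copying the parent theme’s footer.php into the child theme’s directory, since WordPress prefers the child’s copy of a template file. Roughly, the edit looks like this; the surrounding markup is illustrative, not the parent theme’s exact code:

```php
<?php // the child theme's footer.php, which overrides the parent's copy ?>
		<footer id="colophon" class="site-footer" role="contentinfo">
			<div class="site-info">
				<!-- the one-line tweak: a personal credit in the footer -->
				<p>Site by Your Name Here</p>
			</div>
		</footer>
	</div><!-- #page -->

	<?php wp_footer(); // required hook so plugins can print their scripts ?>
	</body>
</html>
```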

Version Control Software: Git

A few weeks ago I was introduced to the world of version control software by my employer. It was when I had to work from home for the week due to pneumonia, and we were in the middle of rolling out Git for all our source code. My assignment was to read up on Git, learn the jargon, understand the inner workings, and just be fairly well versed on the topic so I could maybe help write scripts in the future. Going into it, I had a rough idea of what version control software was used for: keeping track of changes and being able to go back to earlier versions of the source. What I didn’t know was how this simple idea can significantly change your workflow and how you write your code. In my three weeks of learning about Git I’ve since rolled it out on my own coding projects and have developed all sorts of new coding habits. It’s a wonderful piece of software that I believe will make me grow as a programmer.

As I’ve said, version control software basically keeps track of every change you make to your source code, allowing you to revert to any version of it. The beauty of this is that you can easily go off and make changes to your code without fear. It gives you a sense of security that there will always be a copy of your code that both works and compiles correctly. It’s this idea that completely changed how I write code. I can now make sweeping changes and experiments to my source without manually backing up my code base or commenting out huge portions of my old code. I simply delete everything I want to change and replace it with my new, broken code. Then I keep making changes to that broken code until it hopefully works. If it turns out to be a dead end, I just go back to my old code that did work, and it’s like nothing ever happened. There are other benefits to source control, such as being able to track changes, but it’s that one idea that pretty much changes the game and enables all kinds of new workflows.
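In command form, that fearless-experiment loop looks something like this (a throwaway repository in a temp directory; the file names and messages are made up):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com   # commit identity for this toy repo
git config user.name  Dev

echo "stable code" > main.c
git add main.c
git commit -qm "working version"        # known-good snapshot, safe forever

# Tear out the old code and drop in the risky experiment
echo "broken experiment" > main.c
git commit -qam "risky rewrite"

# Dead end? Throw the experiment away and land back on the working version
git reset -q --hard HEAD~1
cat main.c                              # prints: stable code
```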

How this is implemented in software is another interesting topic entirely. Version control comes in two basic flavors: the client-server model, which has a central server from which users can obtain any version of the source, and the distributed model, where every user has their own copy of the entire source base. Software can opt to store only the diffs as new versions are added, or it can store each and every version of the file; either way, clever compression techniques exploit the repetitive nature of the archived contents. Git uses the distributed model, which is what I’ll be writing about. To summarize Git, it’s a very clever way of storing files as they change. A Git repository contains a hidden directory holding every version of the source that has been “committed” and a working directory containing the source code you’re currently working on. Using Git, you “check out” whichever commit you want to make changes to, and this updates your working directory to match. It’s a very quick and painless process, and if you have work that’s not quite ready to commit, you can always “stash” a copy of it to come back to later.

When you start using Git with multiple users it gets even more interesting. Because it’s distributed, each user has a local copy of every version of the source. They can push, pull, and fetch from a remote server when they need or want to, but for the most part development can be done completely on your own. You can even sync up with other users and collaborate on their source if needed.
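The check-out and stash mechanics, concretely (again a throwaway repo, with illustrative names):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

echo "v1" > app.txt
git add app.txt && git commit -qm "first commit"
echo "v2" > app.txt
git commit -qam "second commit"

# Half-finished work that isn't ready to be a commit yet
echo "work in progress" >> app.txt
git stash -q                  # shelve it; the working directory returns to "v2"

git checkout -q HEAD~1        # check out the first commit (detached HEAD)
cat app.txt                   # prints: v1

git checkout -q -             # jump back to where we were...
git stash pop -q              # ...and restore the half-finished work
```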

With these collaborative features, the entire workflow, and how releases are maintained, changes. One popular branching model for Git is known as Git flow. The key to understanding Git flow is branching and merging: breaking off from a stable commit, developing some new code, and incorporating it back into the main trunk. In Git flow there are five types of branches you need to understand:

- master, which maintains the most stable code in a release-ready state;
- hotfix branches, taken off of master to fix critical bugs in released software;
- develop, which comes off the last master release and is where new code is integrated;
- feature branches, where new code is actually written before being merged back into develop;
- release branches, which prepare develop for a merge into master.

I personally really like this workflow because it allows changes and development to be compartmentalized. It also makes excellent use of version control’s promise of maintaining a stable code base.
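A minimal run-through of that model in commands; this sketch only exercises a feature branch and the develop-to-master merge, with hotfix and release branches following the same branch-and-merge pattern:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

echo "release 1.0" > product.txt
git add product.txt && git commit -qm "stable release"
trunk=$(git branch --show-current)   # "master" or "main", depending on Git version

git checkout -qb develop             # integration branch, taken off the stable trunk
git checkout -qb feature/login       # each new feature gets its own branch
echo "login feature" >> product.txt
git commit -qam "add login"

git checkout -q develop              # feature done: merge it back into develop
git merge -q --no-ff -m "merge feature/login" feature/login

git checkout -q "$trunk"             # develop is release-ready: merge into the trunk
git merge -q --no-ff -m "release 1.1" develop
```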

In my time with Git I’ve been responsible for preparing a repository for use on a pretty large firmware project that encompasses at least 6 different hardware products. It’s a pretty tall order for a very new engineer, and it was intimidating at first, but I was happy to take on the challenge. The difficulty is mostly in preparing the code base for distribution and getting it into a position where other collaborators can join the project and actually start to develop their own branches. A huge part of this is software architecture: how do you even share a code base between several products that don’t share a common processor? I’ve found myself moving code around, trying new implementations, incorporating code from other products, and doing all sorts of things that could introduce bugs. Prior to source control this would’ve been a painful process of maintaining messy source code with huge portions of redundant commented-out code. Now I just branch, replace, test, and merge. The first time I pulled off an operation like that, I fell in love with source control.

The FE Exam, aka the Crazy Test

Last weekend I had the pleasure of sitting in the UMass Boston penthouse ballroom watching the sun rise and set over Boston Harbor. Floor-to-ceiling windows and a seat overlooking the water: it was quite a beautiful view. The only downside was that I was there for an 8-hour exam testing me on every single bit of STEM knowledge I’d learned over the past four years. It was honestly one of the most difficult and mentally exhausting exams I’ve taken in my life. I say that because it’s basically an exam where you have to pull different problem-solving procedures and engineering knowledge from your brain non-stop for 8 hours. I’d also say it’s a really easy exam if you know the material, but it’s incredibly easy to simply not know entire portions of it. Now why would I ever decide to subject myself to this torture? Technically, the purpose of the exam is to become eligible to call yourself an “Engineer in Training” (EIT). This allows you to eventually take the PE exam and become a licensed Professional Engineer who can sign off on designs and offer consulting work. This kind of certification is only really important in the public sector and consulting industries, where accountability and public welfare are of utmost importance. Only after I decided to follow through with taking this exam did I realize how little the certification means in my field of computer engineering. In fact, there are so few licensed PEs in my field that it’s actually difficult to find one to train under to fulfill the experiential requirement of the PE license. Still, I decided to follow through with it all the way to the end, simply as a matter of personal pride. I wanted to prove to myself that I did in fact learn something in my years at college and that it wasn’t all just for show.

To summarize the content tested on the FE, you could list off all the topics covered in all the STEM classes on my transcript and add another semester’s worth. It covered math topics from Calculus I-III, Differential Equations, Linear Algebra, Probability, and Statistics; sciences such as Chemistry and Physics I/II; an unusually large amount of mechanics in statics, dynamics, strength of materials, thermodynamics, and fluids; and then the obscure topics of engineering economics and ethics. From my field of specialty I got to enjoy a refresher on electromagnetics, electronics, signals, communications, power electronics, linear circuits, computer architecture, programming, and all that good stuff. There’s probably more that I’m forgetting. I spent two months studying for this exam, coming home brain-dead from work only to work through topics night after night. It made me realize how much knowledge they stuffed into our brains in engineering school. It actually made me feel all fuzzy on the inside reviewing it, with so many memories made learning it all. It was also just so much to relearn that I admittedly wasn’t able to cover everything. Near the end I had to be smart about it, reviewing topics I could understand after a quick refresher and completely ignoring topics I’d have to learn from essentially nothing. Most notably, I found myself generating random numbers to answer the fluid dynamics questions; not my finest moment, but I could expect to get about 25% of those right. Overall, though, leaving that test an hour early, I felt confident that I gave a passing effort. Now it’s just a waiting game to see whether that feeling is reality.