How an Engineer Buys a PC

This week, after 5 years of relentless service, my original $300.00 main home computer died.

While I am pretty sure I could fix the issue with a new power supply, I feel it merits neither the time nor the expense.  Five years is a long time for a $300.00 computer to serve.

So I thought I would write a quick little article on how I buy PCs.  I think that I have a sensible approach which gives me maximum value for the least expense and least surprise.

First, my priorities for a new PC are as follows:

  • Price
  • Reliability
  • Speed – More is better.
  • Support – Allows me to develop code, play games, watch baseball and listen to music; sometimes all at once.
  • Low noise and heat
  • Longevity – Lasts at least 5 years.  Many times, this is directly related to the noise and heat.

Your priorities may differ, but I think that the basic decision-making process is essentially the same.

Determine Your Price Point

Determine your price point, and stick to it…  It’s easy to be swayed, at the last minute, into that 8-core i7 that will sit mostly idle as you surf the web or read email, but what’s the point?

My price point is under $1000.00, the cheaper the better.  However, if I go too cheap, the PC will not be up to the applications of tomorrow.  There is such a thing as too cheap.  Also, unlike many users, I know I will frequently run the computer at full capacity for long periods of time consuming any and all memory the machine has to offer.

I know I will want at least 16 GB of RAM, a decent GPU and as much CPU as I can get my hands on for the price.

Also, I want an SSD for the speed, reliability, and noise and heat reduction.

Here is where I am a bit different from most.  Most folks benefit more from a smaller number of faster cores; however, future performance depends more on parallelization across cores, and I am OK with a single-core performance tradeoff if I get more of them.  This will give me an incentive to write more threaded code when appropriate.

As if in reminder of this fact, as I write this, 6 of my 8 cores are performing a brute-force predictive analysis on 12 cross-referenced tables ranging up to 300k rows of 40 columns.  This takes a couple of hours where once it took the better part of a day.  I have a fine appreciation for multi-threaded code.

Picking My Parts

So I now surf over to the Walmart site and pick “Build my PC”.  They have a couple of build-your-own options which include:

  • Case – Important if you intend to add new drives and other devices.
  • CPU – Will determine how fast your computer computes and also is a key factor in noise and heat.
  • Memory – The more the better.  At 12 GB of RAM, I am still running out.  I am not sure if we will ever have enough memory.
  • Video Card – Important for WebGL development and gaming.
  • SSD – Reliable, fast storage.


The main thing I look for in a case is no gaudy lights, easy access to the internals and plenty of space for expansion.  For $10.00 extra, I ended up with this:


Believe it or not, it’s less gaudy than the others.  Some of them have red lights, which can be less distracting, but I am not sure I would like my room bathed in a red glow at night.  It seems like it could be disturbing…


Personally, I think CPU power is a bit overrated for most folks.  A good SSD and some extra memory are often all that is really needed.  Extra cores are nice if you have aggressive anti-virus.

Here, I still want the most I can get for the money.

So I compare the Walmart offerings and cross-reference CPUs for the performance threshold I am willing to pay for.

After comparing each CPU’s price and power rating, I ended up with respectable multi-core performance, about 8/11 the power of an i7 4790K at a fraction of the cost.  A single-threaded, single-core task would be processed at about 1/8 of this overall rating (same as the 8-core i7).  The AMD chip cost me an extra $84 while the i7 4790K would set me back an extra $338.00.  Patience, I guess, is a virtue.
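The arithmetic behind that tradeoff is simple.  The ratings below are the relative multi-core scores implied by the text (i7 4790K = 11, the AMD part = 8), compared against each part's price premium over the base build:

```javascript
// Performance per premium dollar, using the figures from the text.
// "rating" is the relative multi-core score; "premium" is the extra
// cost each CPU adds over the baseline configuration.
const i7  = { rating: 11, premium: 338 };
const amd = { rating: 8,  premium: 84 };

const perDollar = (cpu) => cpu.rating / cpu.premium;

// The AMD part delivers roughly 3x the multi-core score per premium dollar.
```

The i7 buys about 37% more multi-core throughput, but at four times the premium.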


Video Card

After saving some money on the CPU, it’s time to choose the GPU.  The Walmart site offers cards from the very low end to the cutting-edge $500.00 range.  I’m willing to spend a little bit extra here due to my fascination with WebGL and occasional gaming, but I’d never spend $500.00.  I work too hard for my money.


Balancing cost and performance, I ended up with the GeForce GTX 750 Ti, a solid-performing video card that should be more than powerful enough to support my needs without paying a premium.


I am getting a little less than 1/2 the performance for a card which costs less than 1/4 as much.  Also, since I plan to keep the PC for 5 or more years, the video card might be a candidate for an incremental upgrade if price/performance once again becomes compelling.

Memory and SSD

Last, but certainly not least, I choose the highest-end SSD and memory configuration I can get.  It’s a bit of a splurge which adds an extra $201.00 to the overall price, but these are key to good performance for my workloads.


Wrapping Up

The base cost of this machine is $826.00 with an extra $57.82 in taxes.  With a respectable CPU, video card, SSD, DVD-RW, plenty of expansion slots and 16 GB of DDR3 RAM, it’s a pretty good computer for May 2015.  It’s custom built for me, but fortunately, not built by me.

And this is how a price-conscious engineer selects a computer…

– Pat

Posted in General

P5: a Processing.js Alternative


Decent Processing integration into my application “Dex” has eluded me for a while due to limitations within the JavaFX WebView component, such as its lack of WebGL support.

I was able to achieve a degree of Processing integration by using the Selenium browser automation tool to automate the launching and rendering of dynamic output in a co-browser.  It works fine for the most part, but it’s far from optimal.

While I plan to continue supporting Processing.js, I took the time to evaluate and decided to integrate P5.

What is P5?

p5.js is a JavaScript library that starts with the original goal of Processing—to make coding accessible for artists, designers, educators, and beginners—and reinterprets this for today’s web.  p5.js is a new interpretation, not an emulation or port, and it is in active development.

Moreover, the Processing site links directly to the p5.js page, so it would seem that the Processing community has opted to embrace P5 in spite of its small differences.

For me, P5 is nice because it brings the capabilities of a Processing-like language to the Dex and dex.js platforms.  This allows for greater accessibility to the artists, designers, educators and beginners that Processing and P5 target.

In many cases, by changing a couple of lines, Processing scripts can be converted to P5 scripts.  P5 also supports touch devices.
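To give a feel for how small those changes usually are, here is a toy sketch of the substitutions involved.  These rules are illustrative only, not a real converter; a complete translation has many more cases.

```javascript
// Toy illustration of how mechanical the Processing -> p5 changes can be
// for simple sketches: a few textual substitutions. Not a real converter.
const rules = [
  [/void\s+(setup|draw)\s*\(\)/g, 'function $1()'], // Java-style methods -> JS functions
  [/\bsize\(/g, 'createCanvas('],                   // size() -> createCanvas()
  [/\b(?:int|float)\s+/g, 'var '],                  // typed declarations -> var
];

function toP5(src) {
  return rules.reduce((out, [re, repl]) => out.replace(re, repl), src);
}

const processing = 'void setup() { size(400, 400); int x = 10; }';
const p5Version = toP5(processing);
// p5Version: 'function setup() { createCanvas(400, 400); var x = 10; }'
```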

Hello Line

The first example within P5 is as simple as you could hope for.  It simply draws a line on the screen with the supplied coordinates.

function setup() {
  line(15, 25, 70, 90);
}

Saving Sketches to JavaScript Variables

Sketches can be stored as JavaScript variables, as seen here:

var sketch = function( p ) {
  var x = 100;
  var y = 100;

  p.setup = function() {
    p.createCanvas(700, 410);
  };

  p.draw = function() {
    // draw body elided in the original; a minimal fill:
    p.background(0);
    p.rect(x, y, 50, 50);
  };
};

Then they may be rendered like this:

var myp5 = new p5(sketch);

More Examples

If you are interested in seeing what P5 can do, there are a large number of examples that can be found here.

That’s all for now, time to sleep!


Unfettered Visualization


I was thinking today that there might be a different way to draw new web visuals.  So far I have used numerous frameworks which provide some canned and variably flexible interface on top of the browser’s innate capabilities.  These range from relatively low-level APIs like D3 to higher-level ones such as the Google Charts API.

They are great; however, inevitably, I seem to hit some sort of barrier expressing what I wish to do, regardless of the framework.  I can see how one might do it in SVG; perhaps the framework chose to express a line as a path rather than a polyline.  At some point, these helpful frameworks fall short.  Maybe it’s a case of “can’t get there from here” or a case of “don’t know how to get there from here”.  Either way, the net effect is the same: you’re stuck.

So I started thinking: what if one were to draw the diagram they wished to implement in some WYSIWYG tool capable of outputting a fairly minimal form of SVG?  SVG is much more powerful than most people realize.  There are far fewer barriers to expressing an idea in such a drawing.  Once you draw it, it exists to some extent.  Even if you don’t carry the work forward, someone else might.  The idea is there, fragile as it might be, but it’s there.

Now that our visual, or at least a drawing of it, is expressed in SVG, some sort of helpful annotations might aid an SVG-to-SVG.js type conversion.  Maybe no SVG.js conversion is needed at all.  Regardless, at this point you have a visual which looks roughly like what you want.  However, it’s a static image, incapable of parameterization save for direct DOM manipulation.

You’re far from done, but you really have a tremendous head start at creating a visual expressed either directly through SVG or programmatically through SVG.js.  More importantly, since its initial creation was essentially unbounded, it is more likely to be original and creative.  When we start with an existing component, we tend to take the path of least resistance in the conversion, and the new component behaves much like the old.  Sometimes this is desirable, but sometimes it is a tradeoff which compromises the pristineness of the original idea.  So the idea is to start unfettered: pen and paper, then an SVG editor.

From here, perhaps the annotations could add the automated parameterization which would allow direct control of the view through CSS and also expose non-CSS aspects of the SVG.
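One minimal sketch of that annotation idea: mark the tweakable attributes of a hand-drawn SVG with placeholders, and let a small function fill them in.  The `{{name}}` placeholder syntax and the `parameterize` helper are hypothetical, invented just for this illustration, not an existing tool.

```javascript
// Sketch of the annotation idea: a hand-drawn SVG marks its tweakable
// attributes with {{name}} placeholders; parameterize() fills them in.
// The placeholder syntax is hypothetical, not an existing standard.
const drawing = `
<svg width="100" height="100" xmlns="http://www.w3.org/2000/svg">
  <polyline points="{{points}}" stroke="{{stroke}}" fill="none"/>
</svg>`;

function parameterize(svg, params) {
  // Replace each {{name}} with the matching value; leave unknowns intact.
  return svg.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in params ? params[name] : match);
}

const rendered = parameterize(drawing, {
  points: '0,80 40,20 80,60',
  stroke: 'steelblue',
});
```

The same annotated drawing could then be re-rendered with different data, which is exactly the parameterization the static export lacks.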

Eventing gaps might be shored up through the adoption of lightweight frameworks such as React.js.  I’m not sure it’s needed, since SVG has events already, but we will need a painless way to communicate back and forth between SVG and JavaScript.

Imagine a decent UI written in a very thin layer on top of SVG.  I bet it would outperform its competitors with a minimal footprint.  This might be important in a large-scale SPA with a large number of pages and controls.

Now to take this notion further, it seems that WebGL is another good candidate for this same approach.  WebGL boggles my mind.  However, with a decent editor I might be able to scratch out a decent prototype and convert it over to a programmable visual.  But let’s walk before we run.

I am planning to try this approach.  If it turns out to be an interesting enough experience, I’ll let you know and probably extend the concept into WebGL.

That’s all for now, take care!


Checking in…

I continue working diligently on dexjs and, in turn, importing its capabilities into Dex.  It’s probably going to be a month before the first dexjs release, and at least another month before I release Dex.

Anyway, I had alluded to additional capabilities and charts.  I will demo some works in progress here.

Henry Kim emailed me today and mentioned that he wanted something to view X/Y relationships over time and interactively.

It seemed only a slight variation on a MotionBarChart; however, it turned out to be a bit more extensive than that.  Anyway, here they are:

And as usual, the static graphics link to live demos:

Motion Circle Chart

Draws circles located at x/y coordinates animated over time.


Motion Line Chart

I then extended the Motion Circle Chart to include an interpolated line.  This one is using a basis spline.


This one is using a “step” interpolator.


I’ve also been integrating google charts.

Pie Chart

Yet another pie chart implementation.  I guess you can never have too many.  This one handles large legends pretty well.


Diff Pie Chart

Now here’s a nice one.  The DiffPieChart.  Concentric rings represent different points in time.  It kind of has me thinking about a MotionPieChart.  Somehow, I sense that Tufte is rolling his eyes in disgust.



Timeline

The timeline is useful for comparing things over time in a static way.


Diff Bar Chart

Here is the DiffBarChart.  Like the DiffPieChart, it compares two sets of values with overlaid bars.  I’m going to start using this one at work.


Word Tree

I love the WordTree.  It’s fun to play around with famous speeches like the Gettysburg Address and see how structure reveals intent, sometimes underlying intent.

This one is a bit limited, but hopefully one day I’ll talk Jason Davies into letting me use his as a basis.  If not, I’ll probably write one of my own, I like them that much.


That’s it for now.  Take care all.

– Pat


Time to evolve…

It’s been 4 years since I wrote Dex and began my journey into data visualization.  It’s been gratifying and keeps me grounded in many current technologies.  As the project has evolved, so have its needs.  So some of the technology surrounding Dex is shifting around.  I would like to talk about a few of the changes here.

DokuWiki -> Jekyll

I love DokuWiki, however, I find that I need more and less than it offers.

First, I don’t want to allow online edits.  Having been hacked a few times a year back, I want to keep it simple: no PHP scripts, no dynamic content other than some JavaScript.

Being dissatisfied with the various site-export utilities offered by DokuWiki, I bit the bullet and started migrating the website and the internal Dex documentation to Jekyll.

This gives me much more control over the generated site with fewer worries about being hacked.  There’s nothing worse than opening the main page of your open source project to discover a Viagra ad in its place.

Additionally, it will let me publish the manuals online as well as within the internal Dex documentation.

DexCharts -> DexJS

I have realized that the DexCharts project was really a module within a more general purpose framework which I will publish on GitHub as DexJS.  DexJS will absorb the DexCharts project and the various DexCharts will simply be members of the dex.charts module within DexJS.

NodeJS & NPM

I am using tools supplied via npm.  This makes it easy to manage my JavaScript tool dependencies in one centralized repository.  It’s kind of like rpm on Linux and just as convenient.
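That centralized dependency list is just a package.json.  A minimal sketch is below; the plugin names are real npm packages from this toolchain's era, but the exact versions here are hypothetical.

```json
{
  "name": "dexjs-build",
  "private": true,
  "devDependencies": {
    "gulp": "^3.8.0",
    "gulp-concat": "^2.5.0",
    "gulp-uglify": "^1.2.0",
    "gulp-sourcemaps": "^1.5.0"
  }
}
```

A single `npm install` then pulls the whole toolchain, much as a package manager does on Linux.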


I use gulp to generate my distribution.  Basically, it’s a task runner which provides capabilities similar to Grunt, which is analogous to Ant in the Java tech stack.  So far, I am amazed at the power of Gulp.  Gulp and all its sub-tasks are delivered to me via npm, of course.


Within my gulpfile.js, I am using tasks which aggregate the core dex modules into developer and minified versions with sourcemaps.
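A rough sketch of what such tasks look like, assuming gulp 3-era plugins (gulp-concat, gulp-uglify, gulp-sourcemaps); the source paths are illustrative, not the actual dexjs layout:

```javascript
// Illustrative gulpfile.js fragment; module paths are hypothetical.
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var sourcemaps = require('gulp-sourcemaps');

// Developer build: concatenate the core modules into readable output.
gulp.task('dev', function () {
  return gulp.src(['src/dex.js', 'src/core/*.js'])
    .pipe(concat('dex.js'))
    .pipe(gulp.dest('dist'));
});

// Minified build, with sourcemaps pointing back at the originals.
gulp.task('dist', ['dev'], function () {
  return gulp.src('dist/dex.js')
    .pipe(sourcemaps.init())
    .pipe(uglify())
    .pipe(concat('dex.min.js'))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('dist'));
});
```

Because everything is a stream pipeline, adding a step like linting is just one more `.pipe()`.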

The charts are intentionally excluded.  The various charts have additional dependencies which are not appropriate for general use.

I will likely distribute dexjs as an npm module once I fully assess the ramifications of that.


DexJS will require jQuery and Underscore because of the general power and capability that they provide.  Much of what I have already written could, should, and will be refactored to leverage the capabilities of these two libraries.

Dex -> Github

Over time, I am migrating Dex over to GitHub.  This could take a while, but I’m working towards it.


Dex & Dex Charts Progress

It’s been a while since I checked in, so I figured it was about time.

I’ve been busy reworking Dex and its sister project Dex Charts.  Here are some previews of what’s coming in DexCharts; I’ll push it back to GitHub once I’m comfortable with it.

At a high level, I am building resizability into the core charts.  They should lay out nicely by default and only rarely need tweaking.  Having said that, everything should be exposed for the user to tweak.

There are also some new additions to the Dex Chart family.  Click any static content to go to live content.

Horizon Charts

First of all, here is a rough start on horizon charts:


Motion Charts

Motion Charts are also a first class component within Dex and Dex Charts.


Motion BarCharts are also new.  Drag the mouse over the year to update the bar-chart with that year’s data.


There’s a lot more which I will cover in subsequent posts.


DexCharts Update : November 22, 2013

I’ve been working hard on DexCharts.  Here are a few sneak peeks at what’s coming.

All images link to live content


It’s ironic that I had so much trouble with the bar chart visual.  My main trouble has been around how to handle axes.  I decided to make the axis a first-class component so that I could decouple it from the rest of the visuals.  I also thought that there may be times when you’d want to treat an axis as a separate entity altogether.  Here’s a demo of a few types of axes.  Note that I haven’t quite fixed the log axis yet.

Here we have a horizontal linear axis with a format, and another one with some different styling.  Then we have a series of vertical axes: linear, sqrt, log, time and, of course, ordinal.


Virtually every aspect of the axis can be configured directly from DexCharts.


Bar charts are getting more usable.  I’m not quite there yet, but here is some pretty cool progress.  They are more capable now with much less code.  Why the simplest of charts has been the hardest for me, I do not know.  It’s humbling.  Anyway, here’s some of what the latest bar chart can do:

From the simple…


to a high resolution bar chart…


…to bars with gradients…

…to all sorts of crazy fill patterns…


Clustered Force

Clustered force got a makeover…


You can do some crazy things with it…and with all the components, actually…I’m supporting SVG transforms at virtually every layer.


More soon…

Of course, I broke a few things along the way…which is why I haven’t checked it back into GitHub yet…

I’m also working on providing Angular directives which will render DexCharts with all the goodness that Angular provides (2-way bindings, etc…)

– Pat
