Making a Mac Useful, Part 1: Why Are You Hitting Yourself?


OK. This is going to be a deep dive, and it may take some time: I’m going to review the unbelievable list of tasks necessary to make a Mac OS X system usable for my daily work.

Now, this isn’t particular to Macs, nor even particular to desktop systems. Usually, when I get a new desktop or laptop or tablet or phone, I’m up and running in a few hours – sometimes, a few minutes – but for the next several weeks, I find myself cursing as I realize yet another program or setting hasn’t propagated over to the new machine.

That wouldn’t be much of a problem … except I do most of these tasks only when I first get a machine, and I don’t update my machines often. I update phones roughly once a year, and laptops twice every few years – twice, because my work MacBook Pro and my home MacBook Air get refreshed on roughly the same schedule. While it’s easy to remember to toss a half dozen apps onto a phone and tweak a few settings when you get it, the more complex configuration tasks for a desktop operating system, sometimes involving multiple steps and research, are things that slowly evaporate from my memory over two or three years.

This is the kind of problem that Google’s Chrome OS is designed to solve: a system that ties all your configuration to your account, so if you toss your laptop into a wood chipper, you can get a new one and pick up literally where you left off. Unfortunately, a browser-only operating system really doesn’t work for me. I am primarily a producer, not a consumer, and my daily work environment is filled with programs like Word and Excel and Photoshop and Illustrator and Acrobat and Ecto and Python and Bash and J and Aquamacs and Vi and Eclipse and MAMP and Gimp and so on and so forth.

So I’m more than willing to put up with this once or twice every two or three years. Hopefully, by blogging about it, I’ll get a better grip on the process, and so next time, it will be easier.

So I got me a new MacBook Air with a half-terabyte hard drive, and planned to make this tiny aluminum wedge into my primary computer, replacing both my old MacBook Pro “server” and my MacBook Air mobile writing computer. I began configuring it, writing the list of tasks down, expecting it to take a page or so.


That list rapidly spiraled out of control, so I never started that blogpost, even though I got the new MacBook Air configured so well it did indeed become my primary machine. I carry it everywhere, use it for everything – well, almost everything. It was missing only one critical feature: a connected printer – natch, since it is a lightweight laptop.

I do have a Canon MX870 multifunction printer-scanner-copier hooked up to my old MacBook Pro, but that MacBook Pro was getting so long in the tooth that I was afraid to turn it on, and when I did so Chrome complained that it couldn’t update because my OS was unsupported and Apple complained that the OS was out of date and my neighbors complained because every time I moved the mouse their TV flickered. So, I decided to bite the bullet and replace it, ultimately with a shiny new iMac.

Which brought me back to this list.

Now that I’m doing this process twice, in close succession, I have the opportunity to find out what’s really necessary, and can see where I’ve missed steps. I’ve broken this list into two parts – one very, very long document in which I am documenting, for my own wordy gratification, ALL the tasks that I have to do to make this new Mac useful to me, and then this series of bite-sized articles, which breaks that apart into small logical chunks. By the time I’m done, I’m guessing there will probably be a dozen articles in this series on Macs alone – not counting setting up Windows boxes, or phones, or the work I’ve had to do on my development environments.

To some, this might seem not just a deep dive, but off the deep end. But there’s a dual method to this madness.

First, having this information on the Internet makes it searchable. Many a time I’ve followed a set of directions related to some computing task and found them nearly useless, and only by piecing together clues from half a dozen different pages online have I been able to, somehow, adapt a solution to the problem. (I have no idea where I might have picked up that problem-solving strategy).


But often the information is not available at all. Even doing this blogpost on the new computer required doing several tasks which were simply not documented anywhere. That’s a blogpost for another time, but hopefully, putting this information up there will help change that.

The second reason for documenting this so thoroughly is to put, on record, how difficult it is to use even the easiest of the modern desktop computer operating systems (again, excluding Chrome OS, which does not (yet) compete in feature parity with standard desktop operating systems). I’m a computer scientist with a PhD in Artificial Intelligence who currently works with four different operating systems, and I’ve got thirty-five years’ experience working with dozens of different kinds of computers – and if I have trouble with some of these tasks, what hope does a non-specialist have of fixing their shiny new money-burner when it decides to stop functioning, or, more insidiously, simply fails to work as expected, in some subtle and hard-to-debug way? As my wife says, there’s no hope: she claims the typical user needs to hire someone to help them out, and that’s why the Geek Squad does so well.

Maybe she’s right. But, I hope by putting some of this information out there, I either help some poor shmoe just like me solve their problem … or convince an operating system designer to start thinking energetically about how to make the problem just go away.

-the Centaur

Next up: why pick a (new) iMac?

Talent, Incompetence and Other Excuses


The company I work at is a pretty great place, and it’s attracted some pretty great people – so if your name isn’t yet on the list of “the Greats” it can sometimes be a little intimidating. There’s a running joke that half the people at the firm have Impostor Syndrome, a pernicious condition in which people become convinced they are frauds, despite objective evidence of their competence.

I definitely get that from time to time – not just at the Search Engine That Starts with a G, but previously in my career. In fact, just about as far back as people have been paying me money to do what I do, I’ve had a tape loop of negative thoughts running through my head, saying, “incompetent … you’re incompetent” over and over again.

Until today, when, as I was walking down the hall, I thought of Impostor Syndrome – and of what my many very smart friends would say if I claimed to have it. I realized the response they would immediately give: not “you’re wrong,” which of course they might say, but “well, what do you think you need to do to do a good job?”

Then, in a brain flash, I realized incompetence is just another excuse people use to justify their own inaction.

Now, I admit there are differences in competence in individuals: some people are better at doing things than others, either because of experience, aptitude, or innate talent (more on that bugbear later). But unless the job is actually overwhelming – unless simply performing the task at all taxes normal human competence, and only the best of the best can succeed – being “incompetent” is simply an excuse not to examine the job, to identify the things that need doing, and to make a plan to do them.

Most people, in my experience, just want to do the things that they want to do – and they want to do their jobs the way they want to do them. If your job is well tuned towards your aptitudes, this is great: you can design a nice, comfortable life.

But often the job you want to do requires more of you than doing things the way you want to do them. I’m a night owl, I enjoy working late, and I often tool in just before my first midmorning meeting – but tomorrow, for a launch review of a product, I’ll be showing up at work a couple hours early to make sure that everything is working before the meeting begins. No late night coffee for you.

Doing what’s necessary to show up early seems trivial and obvious to most people who aren’t night owls – but it isn’t trivial, or obvious, to most people that they fail to do what’s necessary in many other areas of their lives. The true successes I know, in contrast, do whatever it takes: switching careers, changing their dress, learning new skills – even picking out the right shirts, if they have to meet with people, or spending hours shaving thirty seconds off their compile times, if they have to code software.

Forget individual differences. If you think you’re “incompetent” at something, ask yourself: what would a “competent” person do? What does it really take to do that job? If it involves a mental or physical skill you don’t have, like rapid mental arithmetic or a ninety-eight mile-per-hour fastball, then cut yourself some slack; but otherwise, figure out what would lead to success in the job, and make sure you do that.

You don’t have to do those things, of course: you don’t have to put on a business suit and do presentations. But that doesn’t mean you’re incompetent at giving presentations: it means you weren’t willing to go to a business wear store to find the right suit or dress, and it means you weren’t willing to go to Toastmasters until you learned to crack your fear of public speaking. With enough effort, you can do those things – if you want to. There’s no shame in not wanting to. Just be honest about why.

That goes back to that other bugbear, talent.

When people find out I’m a writer, they often say “oh, it must take so much talent to do that.” When I protest that it’s really a learned skill, they usually say something a little more honest, “no, no, you’re wrong: I don’t have the talent to do that.” What they really mean, though they may not know it, is that they don’t want to put in the ten thousand hours worth of practice to become an expert.

Talent does affect performance. And from a very early age, I had a talent with words: I was reading soon after I started to walk. But, I assure you, if you read the stuff I wrote at an early age, you’d think I didn’t have the talent to be a writer. What I did have was a desire to write, which translated into a heck of a lot of practice, which developed, slowly and painfully, into skill.

Talent does affect performance. Those of us who work at something for decades are always envious of those people who seem to take to something in a flash. I’ve seen it happen in writing, in computer programming, and in music: an experienced toiler is passed by a newbie with a shitload of talent. But even the talented can’t go straight from raw talent to expert performance: it still takes hundreds or thousands of hours of practice to turn that talent into a marketable skill.

When people say they don’t have talent, they really mean they don’t have the desire to do the work. And that’s OK. When people say they aren’t competent to do a job, they really mean they don’t want to think through what it takes to get the job done, or having done so, don’t want to do those things. And that’s OK too.

Not everyone has to sit in a coffeehouse for thousands of hours working on stories only to find that their best doesn’t yet cut it. Not everyone needs to strum on that guitar for thousands of hours working on riffs only to find that their performance falls flat on the stage. Not everyone needs to put on that suit and polish that smile for thousands of hours working on sales only to find that they’ve lost yet another contract. No-one is making you do those things if you don’t want to.

But if you are willing to put those hours in, you have a shot at the best selling story, the tight performance, the killer sale.

And a shot at it is all you get.

-the Centaur

Pictured: Lenora, my cat, in front of a stack of writing notebooks and writing materials, and a model of the Excelsior that I painted by hand. It’s actually a pretty shitty paint job. Not because I don’t have talent – but because I didn’t want to put hundreds of hours in learning how to paint straight lines on a model. I had writing to do.

Humans are Good Enough to Live

I’m a big fan of Ayn Rand and her philosophy of Objectivism. Even though there are many elements of her philosophy which are naive, or oversimplified, or just plain ignorant, the foundation of her thought is good: we live in exactly one shared world which has a definitive nature, and the good is defined by things which promote the life of human individuals.

It’s hard to overestimate the importance of this move, this Randian answer to the age old question of how to get from “is” to “ought” – how to go from what we know about the world to be true to deciding what we should do. In Rand’s world, ethical judgments are judgments made by humans about human actions – so the ethical good must be things that promote human life.

This may seem like a trivial philosophical point, but there are many theoretically possible definitions of ethics, from the logically absurd “all actions taken on Tuesday are good” to the logically indefensible “things are good because some authority said so.” Rand’s formulation of ethics echoes Jesus’s claim that goodness is not found in the foods you eat, but in the actions you do.

But sometimes it seems like the world’s a very depressing place. Jesus taught that everyone is capable of evil. Rand herself thought that nothing is given to humans automatically – they must choose their values – and that the average human, who never thinks about values, is pretty much a mess of contradictory assumptions, doing good only through luck.

But, I realized, Rand’s wrong about that – because her assumption that nothing is given to humans automatically is itself wrong. She’s a philosopher, not a scientist, and she wasn’t aware of the great strides that have been made in our understanding of how we think – because some of those strides were made in technical fields near the very end of her life.

Rand rails against philosophies like Kant’s, which proposes, among many other things, that human perception of reality is unavoidably distorted by filters built into the human conceptual and perceptual apparatus. Rand admitted that human perception and cognition have a nature, but she believed humans could perceive reality more objectively. Well, in a sense, they’re both wrong.

Modern studies of bias in machine learning show that it’s impossible – mathematically impossible – to learn any abstract concept without some kind of bias. In brief, if you want to predict something you’ve never seen before, you have to take some stance towards the data you’ve seen already – a bias – but there is no logical way to pick a correct bias. Any one you pick may be wrong.
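To make that concrete, here’s a toy sketch in Python (my own illustration, not an example drawn from the formal literature): two “learners” that agree perfectly on every data point seen so far, yet embody different biases and so disagree about the very next point – and nothing in the observed data can settle which one is right.

```python
# Two hypotheses, each embodying a different inductive bias.
def linear_bias(x):
    # Bias A: assume the observed pattern continues indefinitely.
    return x

def bounded_bias(x):
    # Bias B: assume the pattern holds only on the range seen so far.
    return x if x <= 2 else 0

# Both hypotheses fit the observed training data perfectly...
training_data = [(0, 0), (1, 1), (2, 2)]
assert all(linear_bias(x) == y for x, y in training_data)
assert all(bounded_bias(x) == y for x, y in training_data)

# ...yet they disagree about the unseen point x = 3, and no amount
# of the data above can tell us which prediction is correct.
print(linear_bias(3), bounded_bias(3))  # → 3 0
```

The data underdetermines the concept; only a bias – chosen before the data can justify it – lets either learner predict anything at all.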

So, like Kant suggested, our human conceptual processes impose unavoidable biases on the kind of concepts we learn, and unlike Rand wanted, those biases may prove distorting. However, we are capable of virtual levels of processing, which means that even if our base reasoning is flawed, we can build a more formal one, like mathematics, that avoids those problems.

But, I realized, there’s an even stronger reason to believe that things aren’t as bad as Kant or Rand feared, a reason founded in Rand’s ideas of ethics. Even human communities that lack a formalized philosophy are nonetheless capable of building and maintaining systems that last for generations – which means the human default bias leads to concepts that are Randian goods.

In a way, this isn’t surprising. From an evolutionary perspective, if any creature inherited a set of bad biases, it would learn bad concepts, and be unable to reproduce. From a cognitive science perspective, the human mind is constantly attempting to understand the world and to cache the results as automatic responses – what Rand would call building a philosophy.

So, if we are descendants of creatures that survived, we must have a basic bias for learning that promotes our life, and if we live by being rational creatures constantly attempting to understand the world who persist in communities that have lasted for generations, we must have a basic bias towards a philosophy which is just good enough to prevent our destruction.

That’s not to say that the average human being, on their own, without self-examination, will develop a philosophy that Rand or Jesus would approve of. And it’s not to say that individual human beings aren’t capable of great evil – and that human communities aren’t capable of greater evil towards their members.

But it does mean that humans are good enough to live on this Earth.

Our continued existence shows that even though it seems like we live in a cold and cruel universe, the cards are stacked just enough in humanity’s favor for at least some people to thrive. And it shows that while humans are capable of great evil, the bias of humanity is stacked just enough in our favor for human existence to continue.

Rising above the average, of course, is up to you.

-the Centaur

My New Year’s Gift To You: A Mulligan


If you’re not one of those people who gives yourself too much to do, this post may not be for you.

For the rest of us, with goals and dreams and drive, do you ever feel like you’ve got too much to do? I’m not talking about wanting more hours in the day, which we all do, but simply having too many things to do … period. That sense that, even if you had a magic genie willing to give you endless hours, you’d never get everything you wanted to do done.


To keep track of stuff, I use a Hipster PDA, enterprise edition – 8.5×11 sheets of paper, folded on their long axis, with TODO items written on them and bills and such carried within the folder. Each todo has a little box next to it that I can check off, and periodically I copy items from a half-filled sheet to a new sheet, reprioritizing as I go.

But I’m a pack rat, so I keep a lot of my old TODO lists, organized in a file. Sometimes the TODO sheets get saved for other reasons – for example, the sheets are good headers for stacks of papers and notes related to a project. As projects get completed, I come across these old sheets, and have the opportunity to review what I once thought I had to do.

And you know what? Most of the things that you think you need to do are completely worthless. They’re ideas that had relevance at the time, that may have seemed pressing at the time, but were really cover-your-ass responses to possibilities that never came to pass. The situation loomed, came, and then passed you by … and should take your TODOs with it.


I’m not saying you shouldn’t have things on your TODO list. I’m planning my 2013 right now. And I’m not saying you should give yourself a pass on obligations you’ve incurred to others. But I am saying you don’t need to maintain every commitment you’ve ever made to yourself, especially those that came in the form of a TODO list item or a personal challenge.

As an example, one thing I do is take pictures of food and post them to my Google+ stream. Originally I was doing this as preparation for writing restaurant reviews, but I found I actually liked posting the images more than I wanted to spend time writing reviews, especially since I have so much other writing to do. But when I get busy, I take more pictures than I post. I get a backlog.

So how much effort should I put into going back and posting the pictures? None is one good answer, but that raises the question: why take the pictures in the first place? Periodically is another good answer, but it’s actually difficult to figure out what I’ve posted and what I haven’t, so hunting through my image feeds can become its own form of archaeology.


But you know what? The world won’t come to an end if I don’t post every picture I’ve ever taken of one of my favorite dishes at my favorite restaurants. If you’re not obsessive-compulsive, you may not understand this, but the thought of something you said you were going to do that isn’t getting done is an awful torment to those of us who are.

That’s where a mulligan comes in. In the competitive collectible card game Magic: The Gathering, players compose decks of cards which they use in duels with other players – but no matter how well a player has prepared his or her deck, success in a duel depends on a good initial hand of cards. The best deck in the world can be useless if you draw seven “lands” – or none.

So the game allows you to “mulligan” – to discard that initial hand and re-draw with one less card. That’s a slight disadvantage, but a hand with no “lands” is useless – you can’t do anything on the first round, and your opponent will clean your clock. Better to have a balanced hand of six cards than seven you can’t do anything with at all. Better to have at least a chance to win.
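To put rough numbers on why the rule exists, here’s a sketch using Python’s standard library – the 60-card deck with 24 lands is an assumed typical configuration of my own, not a figure from the game’s rules:

```python
from math import comb

DECK, LANDS, HAND = 60, 24, 7  # assumed: a common deck configuration

def chance_of_exactly(k_lands):
    """Hypergeometric probability of exactly k lands in the opening hand."""
    return comb(LANDS, k_lands) * comb(DECK - LANDS, HAND - k_lands) / comb(DECK, HAND)

no_lands = chance_of_exactly(0)   # ≈ 0.0216, about a 2.2% chance
all_lands = chance_of_exactly(7)  # ≈ 0.0009, about a 0.09% chance
print(f"no lands: {no_lands:.1%}, all lands: {all_lands:.2%}")
```

Roughly one opening hand in fifty has no lands at all – rare, but common enough over many games that the mulligan rule earns its keep.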


So that’s my gift to you all this New Year’s Eve: declare yourself a mulligan. Maybe the turn of the seasons is just a notch on the clock, but use this passage as a point of inspiration. It’s a new year, a new day, the starting point of a new path. Remind yourself of your real goals, and throw away any out-of-date TODOs and collected personal obligations that are holding you back.

Hug your wife, pay your bills, feed your cats. Write the software that pays the bills, and the books that you plan to do.

But don’t let yourself get held back by something you wrote a year ago on a piece of paper.

Not for one minute.


If you let yourself, the sky is your limit.

-the Centaur

A Really Good Question


Recently I was driving to work and thinking about an essay by a statistician on “dropping the stick.” The metaphor came from a game of pick-up hockey, where an inattentive player would be asked to “drop the stick” and skate for a while until they got their head in the game. In the statistical context, this became the practice of stopping people who were asking for help with a specific statistical task and asking them what problem they wanted to solve – because solving the actual problem is often very different from fixing the technical issue, and may require a completely different approach. It can be annoying when you ask a question on a mailing list and someone asks what you’re trying to solve rather than addressing the issue you’ve raised, but it’s a good reflex to have: first ask, “What’s the problem?”

Then I realized something even more important about projects that have succeeded or failed in my life – successes at radical, off-the-wall projects like the emotional robot pet project, the cell phone robots with personalities project, and the 3D object visualization project, and failures at seemingly simpler problems like a tweak to a planner at Carnegie Mellon, a test domain for my thesis project, and the failed search improvement I worked on during my third year at the Search Engine that Starts with a G. One thing I noticed about the successes is that before I got started, I did a hard-core, intensive research effort to understand the problem space before tackling the problem proper; then I chose a method of approach; and then I planned out a solution. Paraphrasing Eisenhower: even though the plan often had to change once we started execution, the planning was indispensable. The day-to-day immersion in the problem that you need for planning provides the mental context you need to make the right decisions as the situation inevitably changes.

In failed projects, I found one or more of those things – the hard-core research, or the planning – wasn’t present, but that wasn’t all that was missing. In the failure cases, I often didn’t know what a solution would look like. I recently saw this from the outside when I conducted a job interview, and found that the interviewee clearly didn’t understand what would constitute an answer to my question. He had knowledge, and he was trying, but his suggested moves were only analogically correct – they sounded like elements of a solution, but didn’t connect to the actual features of the problem. Thinking back, a case that leapt to mind from my own experience was a project all the way back in grade school, where we had an urban planning exercise to create an ideal city. My job was to create the map of the city, and I took the problem very literally, starting with a topographical map of the city’s center, river and hills. Now, it’s true that the geography of a city is important – for an ideal city, you’d want a source of water, easy transport, a relatively flat area for many buildings, and at least one high point for scenic vistas. But there was one big problem with my city plan: there were no buildings, neighborhoods, or districts on it! No buildings or people! It was just the land!

OK, so I was in grade school, and this was one of my first projects, so perhaps I could be excused for not knowing what I was doing. But the educators who set up this project knew what they were doing, and they brought on board an actual city planner to talk to us about our project. When he saw my maps, he pointed out that this wasn’t a city plan, and sat down with all of us to brainstorm what we’d actually want in a city – neighborhoods, power plants, a city center, museums, libraries, hospitals, food distribution and industrial regions. At the time, I was saddened that my hard work was abandoned; now, in hindsight, I’m saddened that the city planner didn’t take a minute or two to talk about how geography affects cities before beginning his brainstorming exercise. But what struck me most in hindsight is that I really didn’t know what constituted an answer to the problem.


So, I asked myself, “What counts as a solution to this problem?” – and that, I realized, is a very good question.

-the Centaur

Pictured: an overhead shot of a diorama of the control room of the ENIAC computer as seen at the Computer History Museum, and of course our friend Clarence having his sudden moment of clarity.

On John Scalzi On Writing for Free


John Scalzi recently complained about people asking him to write for free. His article’s funny, and it reveals a lot about the writing industry that many people outside the industry don’t know. Then I realized:

If someone asks you to write for free, that means they think writing is worth nothing.

(Well, maybe they’re broke, or ignorant, or something similar. But generally that’s clear from context.)

Today’s moment of clarity was brought to you by Whatever.

-the Centaur

Pictured: “Sudden Clarity Clarence,” a young man who’s just realized he’s spawned an internet meme.