Press "Enter" to skip to content

Posts tagged as “The Exploration of Intelligence”

[twenty twenty-four post one hundred]: trial runs

taidoka 0

Still hanging in there, apparently - we made it to 100 blog posts this year without incident. Taking care of some bidness today, so please enjoy this preview of the t-shirts for the Embodied Artificial Intelligence Workshop. Still trying out suppliers - the printing on this one came out grey rather than white.

Perhaps we should go whole hog and use the logo for the workshop proper, which came out rather nice.

-the Centaur

Pictured: Um, like I said, a prototype t-shirt for EAI#5, and the logo for EAI#5.

[twenty twenty-four day ninety-four]: to choke a horse

centaur 0

What you see there is ONE issue of the journal IEEE Transactions on Intelligent Vehicles. This single issue spans two volumes and over two hundred articles, comprising three THOUSAND pages.

I haven't read the issue - it came in the mailbox today - so I can't vouch for the quality of the articles. But, according to the overview article, their acceptance rate is down near 10%, which is pretty selective.

Even so, two hundred articles seems excessive. I don't see how this is serving the community; you can't read two hundred papers, nor skim two hundred abstracts to see what's relevant - at least, not in a timely fashion. Heck, you can't even fully search that many, as some articles might use different terminology for the same thing (e.g., "multi-goal reinforcement learning" for "goal-conditioned reinforcement learning", or even "universal value function approximators" for essentially the same concept).

And the survey paper itself needs a little editing. The title appears to be a bit of a word salad, and the first bullet point duplicates words ("We have received 4,726 submissions have received last year."). I just went over one of my own papers with a colleague, and we found similar errors, so I don't want to sound too harsh, but I still think this needed a round of copyedits - and perhaps the journal needs to be forked into several more specialized journals.

Or ... hey ... it DID arrive on April 1st. You don't think ...

-the Centaur

Pictured: the very real horse-choking tome that is the two volumes of the January 2024 edition of TIV, which is, as far as I can determine, not actually an April Fool's prank, but just a journal that is fricking huge.

Announcing the 5th Annual Embodied AI Workshop

centaur 0

Thank goodness! At last, I'm happy to announce the Fifth Annual Embodied AI Workshop, held this year in Seattle as part of CVPR 2024! This workshop brings together vision researchers and roboticists to explore how having a body affects the problems you need to solve with your mind.

This year's workshop theme is "Open-World Embodied AI" - embodied AI when you cannot fully specify the tasks or their targets at the start of your problem. We have three subthemes:

  • Embodied Mobile Manipulation: Going beyond our traditional manipulation and navigation challenges, this topic focuses on moving objects through space at the same time as moving yourself.
  • Generative AI for Embodied AI: Building datasets for embodied AI is challenging, but we've made a lot of progress using "synthetic" data to expand these datasets.
  • Language Model Planning: Lastly but not leastly, a topic near and dear to my heart: using large language models as a core technology for planning with robotic systems.

The workshop will have six speakers and presentations from six challenges, and perhaps a sponsor or two. Please come join us at CVPR, though we also plan to support hybrid attendance.

Presumably, the workshop location will look something like the above, so we hope to see you there!

-the Centaur

Pictured: the banner for EAI#5, partially done with generative AI guided by my colleague Claudia Perez D'Arpino and Photoshoppery done by me. Also, last year's workshop entrance.

[twenty twenty-four day sixty-one]: the downside is …

centaur 0

... these things take time.

Now that I’m an independent consultant, I have to track my hours - and if you work with a lot of collaborators on a lot of projects like I do, it doesn’t do you much good to only track your billable hours for your clients, because you need to know how much time you spend on time tracking, taxes, your research, conference organization, writing, doing the fricking laundry, and so on.

So, when I decided to start being hard on myself with cleaning up messes as-I-go so I won’t get stressed out when they all start to pile up, I didn’t stop time tracking. And I found that some tasks that I thought took half an hour (blogging every day) took something more like an hour, and some that I thought took only ten minutes (going through the latest bills and such) also took half an hour to an hour.

We’re not realistic about time. We can’t be, not just as humans, but as agents: in an uncertain world where we don’t know how much things will cost, planning CANNOT be guaranteed to find the best plan unless our estimates never OVER-estimate the cost or time that plans will take - what’s called an “admissible heuristic” in artificial intelligence planning language. Overestimation leads us to avoid choices that could be the right answers.
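If you want to see that idea in code, here's a minimal sketch of A* search on a toy grid - the grid, costs, and heuristics are made up for illustration, not taken from any real planner. The guarantee of finding the cheapest plan holds only when the heuristic never over-estimates the remaining cost:

```python
# Toy sketch (hypothetical example) of why admissibility matters: A* is only
# guaranteed to return the cheapest plan when h(n) never over-estimates the
# true remaining cost to the goal.
import heapq

def a_star(start, goal, neighbors, step_cost, h):
    """Generic A*: returns (path, cost), guided by heuristic h."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in neighbors(node):
            g2 = g + step_cost(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# A 5x5 grid world where every move costs 1.
def neighbors(n):
    x, y = n
    return [(i, j) for i, j in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if 0 <= i < 5 and 0 <= j < 5]

goal = (4, 4)
admissible = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan distance never over-estimates
inflated = lambda n: 10 * admissible(n)                           # over-estimates: the optimality guarantee is lost

path, cost = a_star((0, 0), goal, neighbors, lambda a, b: 1, admissible)
print(cost)  # 8 - the true shortest path length
```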

So we “need” to lie to ourselves, a little bit, about how hard things are.

But it still sucks when we find out that they are pretty fricking hard.

-the Centaur

P.S. This post, and some of the associated research and image harvesting, I expected to take five minutes. It took about fifteen. Go figure. Pictured: the "readings" shelves, back from the days when to get a bunch of papers on something you had to go to the library and photocopy them, or buy a big old book called "Readings in X" and hope it was current enough and comprehensive enough to have the articles you needed - or to attend the conferences themselves and hope you found the gold among all the rocks.

[twenty twenty-four day nineteen]: our precious emotions

centaur 0

It's hard to believe nowadays, but the study of psychology for much of the twentieth century was literally delusional. The first half was dominated by behaviorism, a bad-faith philosophy of psychology - let's not stoop to calling it science - which denied the existence of internal mental states. Since virtually everyone has an inner mental life, and it's trivial to design an experiment which relies on internal mental reasoning to produce outcomes, it's almost inconceivable that behaviorism lasted as long as it did; but it nevertheless contributed a great deal of understanding of stimulus-response relationships to our scientific knowledge. That didn't mean it wasn't wrong, and by the late twentieth century, it had been definitively refuted by cognitive architecture studies which modeled internal mental behavior in enough detail to predict what brain structures were involved with different reasoning phenomena - structures later detected in brain scans.

Cognitive science had its own limits: while researchers such as myself grew up with a very broad definition of cognition as "the processes that the brain does when acting intelligently," many earlier researchers understood the "cognitive" in "cognitive psychology" to mean "logical reasoning". Emotion was not a topic which was well understood, or even well studied, or even thought of as a topic of study: as best I can reconstruct it, the reasoning - such as it was - seems to have been that since emotions are inherently subjective - related to a single subject - then the study of emotions would also be subjective. I hope you can see that this is just foolish: there are many things that are inherently subjective, such as what an individual subject remembers, which nonetheless can be objectively studied across many individual subjects, to illuminate solid laws like the laws of recency, primacy, and anchoring.

Now, in the twenty-first century, memory, emotion and consciousness are all active areas of research, and many researchers argue that without emotions we can't reason properly at all, because we become unable to adequately weigh alternatives. But beyond the value contributed by those specific scientific findings is something more important: the general scientific understanding that our inner mental lives are real, that our feelings are important, and that our lives are generally better when we have an affective response to the things that happen to us - in short, that our emotions are what make life worth living.

-the Centaur

[drawing every day 2024 post ten]: moar hands

centaur 0

Still working through the Goldman book, which has the inspirational quote: "I hope you wear this book out from overuse!" And that's what you need when you're practicing!

-the Centaur

P.S. My wife and I were talking about learning skills, and she complained that she hadn't quite gotten what she wanted to out of a recent set of books. It occurred to me that there are two situations in which reading books about a skill doesn't help you:

  • It can be you haven't yet found the right book, course or teacher that breaks it down in the right way (for me in music, for example, it was "Understanding the Fundamentals of Music" which finally helped me understand the harmonic progression, the circle of fifths, and scales, and even then I had to read it twice).
  • It can be because you're not doing enough of the thing to know the right questions to ask, which means you may not recognize the answers when they're given to you.

Both of these are related to Vygotsky's Zone of Proximal Development - you can most easily learn things that are related to what you already know. Without a body of practice at a skill, reading up on it can sometimes turn into armchair quarterbacking and doesn't help you (and can sometimes even hurt you); with a body of practice, it turns into something closer to an athlete watching game footage to improve their own game.

So! Onward with the drawing. Hopefully some of the drawing theory will stick this time.

[twenty twenty-four day four]: there’s some problems on that boat

centaur 0

Okay, I'm going to start out with the best of the images that I produced trying to create Porsche the Centaur using ChatGPT's DALL-E interface. The above is ... almost Porsche, though her ears are too high (centaurs in the Alliance universe have ears a little more like an elf, but mobile like a dog's). And, after some coaxing, the ChatGPT / DALL-E hybrid managed to produce a halfway decent character sheet:

But both of these images came after several tries. And when I tried to get ChatGPT / DALL-E to generate a front and back view of the same character sheet, it just disintegrated into random horse and human parts:

Similarly, the initial centaur image came only after many prompt tweaks and false starts, like this one:

There are legitimate questions about whether the current round of AI art generators were trained on data taken without permission (they almost certainly were), whether they could displace human artists (they almost certainly will), and whether they will have destructive effects on human creativity (the jury is out on this one, as some forms of art will wane while new forms of art will wax).

But never let anyone tell you they've worked out all the bugs yet. These systems are great renderers at the image patch level, but their notion of coherence leaves a lot to be desired, and their lack of structural knowledge means their ability to creatively combine is radically limited to surface stylistics.

One day we'll get there. But it will take a lot of work.

-the Centaur

Too much to keep up with

centaur 0

When I was a kid, I read an article by Isaac Asimov complaining that the pace of scientific publication had become so great that he couldn't possibly keep up. When I was an adult, I realized that the end of the article - in which he claimed that if you heard panting behind his office door it was because he was out of breath from trying to read the scientific literature - was a veiled reference to masturbation. Yep, Isaac is the Grand Dirty Old Man of science fiction, and, man, we love you, but, damn, sometimes, you needed a filter.

Well, the future is now, and the story is repeating itself - sans Isaac's ending; my regular fiction is a touch blue so there's no need for my blog to get prurient. I'm a robotics researcher turned consultant, focusing on, among a kazillion other things, language model planning - robots using tools like ChatGPT to write their own programs. As part of this, I'm doing research - market research on AI and robotics, general research on the politics of AI, and technical research on language models in robotics.

A good buddy from grad school is now a professor, and he and I have restarted a project from the 90's on using stories to solve problems (the Captain's Advisory Tool, using Star Trek synopses as a case-base, no joke). And we were discussing this very problem: he was complaining that the pace of research has picked up to the point where he can no longer keep up with the literature. So it isn't just me.

But the best story yet on how fast things are changing? Earlier this month, I was going through some articles on large language models for my research - and a new announcement came out while I was still reading the articles I had just collected that morning.

Singularity, here we come.

-the Centaur

[forty-seven] minus twenty-one: i hear there’s a new ai hotness

centaur 0

SO automatic image generation is a controversial thing I think about a lot. Perhaps I should comment on it sometime. Regardless, I thought I'd show off the challenges that come from using this technology with a simple example. If you recall, I did a recent post with a warped bookstore picture, and attempted to regenerate it using generative AI with Midjourney. Unfortunately, the prompt

a magical three-dimensional impossible bookstore in the style of M.C. Escher

me

failed to pick up the image for some reason. After a few iterations with the Midjourney Discord interface, I got the very nice, but nonsensical and generic, AI generated image you see up top. After playing around with the API, I realized that I likely had formulated my prompt wrong, and tried again to include this image:

On the second pass, I got another, more on-point, yet still nonsensical image as you see below:

These systems do LOOK impressive. But they work like ... amateurs who've learned to render well. They can produce things that are cool, but it's very hard to make them produce something on point.

And this is above and beyond the massive copyright issues that arise from a system that regurgitates other people's copyrighted art, much less the impact on jobs, much less the impact on the human soul.

-the Centaur

do, or do not. there is no blog

centaur 0

One reason blogging suffers for me is that I always prioritize doing over blogging. That sounds cool and all, but it's actually just another excuse. There's always something more important than doing your laundry ... until you run out of underwear. Blogging has no such hard failure mode, so it's even easier to fall out of the habit. But the reality is, just like laundry, if you set aside a little time for it, you can stay ahead - and you'll feel much healthier and more comfortable if you do.

-the Centaur

Pictured: "Now That's A Steak Burger", a 1-pound monster from Willard Hicks, where I took a break from my million other tasks to catch up on Plans and the Structure of Behavior, the book that introduced idea of the test-operate-test-exit (TOTE) loop as a means for organizing behavior, a device I'm finding useful as I delve into the new field of large language model planning.

Ugh, WordPress updates edition …

centaur 0

... the block editor of WordPress seems to be making my old non-block-editor posts turn into solid walls of text. See the post "Pascal's Wager and Purchasing Parsley":

Yeah, it's not supposed to be looking like that. Gotta track those down and fix them.

In other news, my Half-Cheetah policy is successfully training to "expected" levels of performance. Yay! I guess that means my code for the assignment is ... sorta correct? Time to clean it up and submit it.

-the Centaur

What is “Understanding”?

taidoka 1

When I was growing up - or at least when I was a young graduate student in a Schankian research lab - we were all focused on understanding: what did it mean, scientifically speaking, for a person to understand something, and could that be recreated on a computer? We all sort of knew it was what we'd call nowadays an ill-posed problem, but we had a good operational definition, or at least an operational counterexample: if a computer read a story and could not answer the questions that a typical human being could answer about that story, it didn't understand it at all.

But there are at least two ways to define a word. What I'll call a practical definition is what a semanticist might call the denotation of a word: a narrow definition, one which you might find in a dictionary, which clearly specifies the meaning of the concept, like a bachelor being an unmarried man. What I'll call a philosophical definition, the connotations of a word, are the vast web of meanings around the core concept, the source of the fine sense of unrightness that one gets from describing Pope Francis as a bachelor, the nuances of meaning embedded in words that Socrates spent his time pulling out of people, before they went and killed him for being annoying. It's those connotations of "understanding" that made all us Schankians very leery of saying our computer programs fully "understood" anything, even as we were pursuing computer understanding as our primary research goal.

I care a lot about understanding, deep understanding, because, frankly, I cannot effectively do my job of teaching robots to learn if I do not deeply understand robots, learning, computers, the machinery surrounding them, and the problem I want to solve; when I do not understand all of these things, I stumble in the dark, I make mistakes, and end up sad. And it's pursuing a deeper understanding about deep learning where I got a deeper insight into deep understanding.

I was "deep reading" the Deep Learning book (a practice in which I read, or re-read, a book I've read, working out all the equations in advance before reading the derivations), in particular section 5.8.1 on Principal Components Analysis, and the authors made the same comment I'd just seen in the Hands-On Machine Learning book: "the mean of the samples must be zero prior to applying PCA." Wait, what? Why? I mean, thank you for telling me, I'll be sure to do that, but, like ... why?

I didn't follow up on that question right away, because the authors also tossed off an offhand comment like, "XᵀX is the unbiased sample covariance matrix associated with a sample x," and I'm like, what the hell, where did that come from? I had recently read the section on variance and covariance but had no idea why this would be associated with the transpose of the design matrix X multiplied by X itself. (In case you're new to machine learning, if x stands for an example input to a problem, say a list of the pixels of an image represented as a column of numbers, then the design matrix X is all the examples you have, but each example listed as a row. Perfectly not confusing? Great!)

So, since I didn't understand why Var[x] = XᵀX, I set out to prove it myself. (Carpenters say, measure twice, cut once, but they'd better have a heck of a lot of measuring and cutting under their belts - moreso, they'd better know when to cut and measure before they start working on your back porch, or you and they will have a bad time. Same with trying to teach robots to learn: it's more than just practice; if you don't know why something works, it will come back to bite you, sooner or later, so, dig in until you get it.) And I quickly found that the "covariance matrix of a variable x" was a thing, and quickly started to intuit that the matrix multiplication would produce it. This is what I'd call surface level understanding: going forward from the definitions to obvious conclusions. I knew the definition of matrix multiplication, and I'd just re-read the definition of covariance matrices, so I could see these would fit together.

But as I dug into the problem, it struck me: true understanding is more than just going forward from what you know: "The brain does much more than just recollect; it inter-compares, it synthesizes, it analyzes, it generates abstractions" - thank you, Carl Sagan. But this kind of understanding is a vast, ill-posed problem - meaning, a problem without a unique and unambiguous solution.

But as I was continuing to dig through the problem, reading through the sections I'd just read on "sample estimators," I had a revelation. (Another aside: "sample estimators" use the data you have to predict data you don't, like estimating the height of males in North America from a random sample of guys across the country; "unbiased estimators" may be wrong but their errors are grouped around the true value.) The formula for the unbiased sample estimator for the variance actually doesn't look quite like the matrix transpose formula - but it depends on the unbiased estimator of the sample mean.

Suddenly, I felt that I understood why PCA data had to have a mean of 0. Not driving forward from known facts and connecting their inevitable conclusions, but driving backwards from known facts to hypothesize a connection which I could explore and see. I even briefly wrote a draft of the ideas behind this essay - then set out to prove what I thought I'd seen. Setting the mean of the samples to zero made the sample mean drop out of the sample variance - and then the matrix multiplication formula dropped out. Then I knew I understood why PCA data had to have a mean of 0 - or how to rework PCA to deal with data which had a nonzero mean.
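If you want to check that result numerically, here's a small NumPy sketch - my own toy example, not code from either book. Once each feature of the design matrix X (examples as rows) has zero mean, the unbiased sample covariance collapses to XᵀX / (m - 1); without centering, the two formulas disagree:

```python
# Toy numerical check: with zero-mean data, the unbiased sample covariance
# equals X^T X / (m - 1); with uncentered data, it does not.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, size=(500, 3))   # m = 500 examples of 3 features, mean far from zero
Xc = X - X.mean(axis=0)                  # center: subtract the per-feature sample mean

cov_numpy = np.cov(Xc, rowvar=False)          # NumPy's unbiased sample covariance
cov_matmul = Xc.T @ Xc / (Xc.shape[0] - 1)    # the X^T X / (m - 1) formula

print(np.allclose(cov_numpy, cov_matmul))     # True: with zero-mean data the formulas agree
print(np.allclose(np.cov(X, rowvar=False),
                  X.T @ X / (X.shape[0] - 1)))  # False: without centering they do not
```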
This I'd call deep understanding: reasoning backwards from what we know to provide reasons for why things are the way they are. A recent book on science I read said that some regularities, like the length of the day, may be predictive, but other regularities, like the tides, cry out for explanation. And once you understand Newton's laws of motion and gravitation, the mystery of the tides is readily solved - the answer falls out of inertia, angular momentum, and gravitational gradients. With apologies to Larry Niven, of course a species that understands gravity will be able to predict tides.

The brain does do more than just remember and predict to guide our next actions: it builds structures that help us understand the world on a deeper level, teasing out rules and regularities that help us not just plan, but strategize. Detective Benoit Blanc from the movie Knives Out claimed to "anticipate the terminus of gravity's rainbow" to help him solve crimes: realizing how gravity makes projectiles arc, using that to understand why the trajectory must be the observed parabola, and strolling to the target.

So I'd argue that true understanding is not just forward-deriving inferences from known rules, but also backward-deriving causes that can explain behavior. And this means computing the inverse of whatever forward prediction matrix you have, which is a more difficult and challenging problem, because that matrix may not have a well-defined inverse.

So true understanding is indeed a deep and interesting problem! But, even if we teach our computers to understand this way ... I suspect that this won't exhaust what we need to understand about understanding. For example: the dictionary definitions I've looked up don't mention it, but the idea of seeking a root cause seems embedded in the word "under-standing" itself ... which makes me suspect that the other half of the word, standing, itself might hint at the stability, the reliability of the inferences we need to be able to make to truly understand anything. I don't think we've reached that level of understanding of understanding yet.

-the Centaur

Pictured: Me working on a problem in a bookstore. Probably not this one.

Information Hygiene

centaur 0

Our world is big. Big, and complicated, filled with many more things than any one person can know. We rely on each other to find out things beyond our individual capacities and to share them so we can succeed as a species: there's water over the next hill, hard red berries are poisonous, and the man in the trading village called Honest Sam is not to be trusted.

To survive, we must constantly take in information, just as we must eat to live. But just like eating, consuming information indiscriminately can make us sick. Even when we eat good food, we must clean our teeth and go to the bathroom - and bad food should be avoided. In the same way, we have to digest information to make it useful, we need to discard information that's no longer relevant, and we need to avoid misinformation so we don't pick up false beliefs. We need habits of information hygiene.

Whenever you listen to someone, you absorb some of their thought process and make it your own. You can't help it: that's the purpose of language, and that's what understanding someone means. The downside is that your brain is a mess of different overlapping modules all working together, and not all of them can distinguish between what's logically true and false. This means learning about the beliefs of someone you violently disagree with can make you start to believe them, even if you consciously think they're wrong. One acquaintance of mine started studying a religion with the intent of exposing it. He thought it was a cult, and his opinion about that never changed. But at one point, he found himself starting to believe what he read, even though, then and now, he found their beliefs logically ridiculous.

This doesn't mean we need to shut out information from people we disagree with - but it does mean we can't uncritically accept information from people we agree with. You are the easiest person for yourself to fool: we have a cognitive flaw called confirmation bias which makes us more willing to accept information that confirms our prior beliefs than information that denies them. Another flaw called cognitive dissonance makes us want to actively resolve conflicts between our beliefs and new information, leading to a rush of relief when they are reconciled; combined with confirmation bias, people's beliefs can actually be strengthened by contradictory information.

So, as an exercise in information hygiene for those involved in one of those charged political conversations that dominate our modern landscape, try this. Take one piece of information that you've gotten from a trusted source, and ask yourself: how might this be wrong? Take one piece of information from an untrusted source, and ask yourself, how might this be right? Then take it one step further: research those chinks in your armor, or those sparks of light in your opponent's darkness, and see if you can find evidence pro or con. Try to keep an open mind: no-one's asking you to actually change your mind, just to see if you can tell whether the situation is actually as black and white as you thought.

-the Centaur

Pictured: the book pile, containing some books I'm reading to answer a skeptical friend's questions, and other books for my own interest.

Now I Know the Problem

centaur 0

Hoisted from Facebook … what’s the biggest problem with the world today?

First I studied logic, and found out many people don’t know how to construct an argument, and I thought that was the biggest problem.

Then I studied emotion, and found out many people judge arguments to be correct if they make them feel good, and I thought that was the biggest problem.

Then I studied consciousness, and found out many people don’t argue at all, they post-hoc justify preconscious decisions, and then I thought that was the biggest problem.

Then I studied politics, and I realized the biggest problem was my political opponents, because they don’t agree with me!

-the Centaur

Pictured: Me banging on a perfectly good piece of steel until it becomes useless.

I just think they don’t want AI to happen

centaur 2
Hoisted from Facebook: I saw my friend Jim Davies share the following article:
http://www.theguardian.com/commentisfree/2016/mar/13/artificial-intelligence-robots-ethics-human-control

The momentous advance in artificial intelligence demands a new set of ethics ... In a dramatic man versus machine encounter, AlphaGo has secured its third, decisive victory against a renowned Go player. With scientists amazed at how fast AI is developing, it’s vital that humans stay in control.
I posted: "The AI researchers I know talk about ethics and implications all the time - that's why I get scared about every new call for new ethics after every predictable incremental advance." I mean, Jim and I have talked about this, at length; so did my I and my old boss, James Kuffner ... heck, one of my best friends, Gordon Shippey, went round and round on this over two decades ago in grad school. Issues like killbots, all the things you could do with the 99% of a killbot that's not lethal, the displacement of human jobs, the potential for new industry, the ethics of sentient robots, the ethics of transhuman uplift, and whether any of these things are possible ... we talk about it a lot. So if we've been building towards this for a while, and talking about ethics the whole time, where's the need for a "new" ethics, except in the minds of people not paying attention? But my friend David Colby raised the following point: "I'm no scientist, but it seems to me that anyone who doesn't figure out how to make an ethical A.I before they make an A.I is just asking for trouble." Okay, okay, so I admit it: my old professor Ron Arkin's book on the ethics of autonomous machines in warfare is lower in my stack than the book I'm reading on reinforcement learning ... but it's literally in my stack, and I think about this all the time ... and the people I work with think about this all the time ... and talk about it all the time ... so where is this coming from? I feel like there's something else beneath the surface. Since David and I are space buffs, my response to him was that I read all these stories about the new dangers of AI as if they said:
With the unexpected and alarming success of the recent commercial space launch, it's time for a new science of safety for space systems. What we need is a sober look at the risks. After all, on a mission to Mars, a space capsule might lose pressure. Before we move large proportions of the human race to space, we need to, as a society, look at the potential catastrophes that might ensue, and decide whether this is what we want our species to be doing. That's why, at The Future of Life on Earth Institute, we've assembled the best minds who don't work directly in the field to assess the real dangers and dubious benefits of space travel, because clearly the researchers who work in the area are so caught up with enthusiasm that they're not seriously considering the serious risks. Seriously. Sober. Can we ban it now? I just watched Gravity and I am really scared after clenching my sphincter for the last ninety minutes.
To make that story more clear if you aren't a space buff: there are more commercial space endeavors out there than you can shake a stick at, so advances in commercial space travel should not be a surprise - and the risks outlined above, like decompression, are well known and well discussed. Some of us involved in space also talk about these issues all the time. My friend David has actually written a book about space disasters, DEBRIS DREAMS, which you can get on Amazon.

So to make the analogy more clear, there are more research teams working on almost every possible AI problem that you can think of, so advances in artificial intelligence applications should not be a surprise - and the risks outlined by most of these articles are well known and discussed. In my personal experience - my literal personal experience - issues like safety in robotic systems, whether to trust machine decisions over human judgment, and the potential for disruption of human jobs or even life are all discussed more frequently, and with more maturity, than I see in all these "sober calls" for "clear-minded" research from people who wouldn't know a laser safety curtain from an orbital laser platform.

I just get this sneaking suspicion they don't want AI to happen.

-the Centaur

Why yes, I’m running a deep learning system on a MacBook Air. Why?

centaur 1
Yep, that’s Python consuming almost 300% of my CPU - guess what, I guess that means this machine has four processing cores, since I saw it hit over 300% - running the TensorFlow tutorial.

For those that don’t know, "deep learning" is a relatively recent type of learning which uses improvements in both processing power and learning algorithms to train learning networks that can have dozens or hundreds of layers - sometimes as many layers as neural networks in the 1980’s and 1990’s had nodes. For those that don’t know even that, neural networks are graphs of simple nodes that mimic brain structures, and you can train them with data that contains both the question and the answer. With enough internal layers, neural networks can learn almost anything, but they require a lot of training data and a lot of computing power.

Well, now we’ve got lots and lots of data, and with more computing power, you’d expect we’d be able to train larger networks - but the first real trick was discovering mathematical tricks that keep the learning signal strong deep, deep within the networks. The second real trick was wrapping all this amazing code in a clean software architecture that enables anyone to run the software anywhere.

TensorFlow is one of the most recent of these frameworks - it’s Google’s attempt to package up the deep learning technology it uses internally so that everyone in the world can use it - and it’s open source, so you can download and install it on most computers and try out the tutorial at home. The CPU-baking example you see running here, however, is not the simpler tutorial, but a test program that runs a full deep neural network. Let’s see how it did:

Well. 99.2% correct, it seems. Not bad for a couple hundred lines of code, half of which is loading the test data - and yeah, that program depends on 200+ files worth of Python that the TensorFlow installation loaded onto my MacBook Air, not to mention all the libraries that the TensorFlow Python installation depends on in turn …

But I still loaded it onto a MacBook Air, and it ran perfectly. Amazing what you can do with computers these days.

-the Centaur
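P.S. If you want to try something similar today, here's a rough sketch of the same kind of convolutional MNIST experiment using the modern Keras API in TensorFlow 2.x - an approximation of my own, not the original 2016 tutorial code, which used the older low-level graph API:

```python
# A minimal convolutional MNIST classifier in TensorFlow 2.x / Keras
# (a sketch in the spirit of the old "deep MNIST" demo, not the original code).
import tensorflow as tf

# Load MNIST: 60,000 training and 10,000 test images of handwritten digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # scale pixels to [0, 1], add a channel axis
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, batch_size=64)
print(model.evaluate(x_test, y_test))  # roughly 99% test accuracy on a laptop CPU
```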

I don’t read patents

centaur 0
A friend recently overheard someone talking trash about how big companies were kowtowing to them because of a patent they had - and the friend asked me about it. Without knowing anything about the patent, it certainly does sound plausible someone would cut deals over an awarded patent - once a patent is awarded it's hard to get rid of. But I couldn't be of much more help to them, because I couldn't read the patent.

As a working engineer (and, briefly, former IP lead for an AI company) I've had to adopt a strict policy of not reading patents. The reason is simple - if you as an engineer look at a patent and decide that it doesn't apply to you, and a court later decides that you're wrong, the act of looking at the patent can be taken as evidence of willful patent infringement and can result in treble damages.

In case you're wondering, this isn't just me - most IP guys will tell you, if you are an engineer do NOT look at patents prior to doing your work - do what you need to do, apply for patent protection for what you're doing that you think is new, useful and non-obvious, and let the lawyers sort out the rest - if it ever comes up, which usually it won't. Not everyone agrees, and it really applies less to indie developers and open source projects than it does to people working at big companies with deep pockets likely to get sued.

Unfortunately I work at a big company with deep pockets likely to get sued, so I don't look at patents. Don't send them to me, don't tell me about them, and if, God forbid, you think I or someone I know is violating a patent you hold, I'll find the number of our legal counsel, and they'll assign someone to evaluate the claim who specializes in that kind of thing.

Hate the damn things.

-the Centaur

Pictured: a big red stop button for a robot, I think from one at Bosch.