Does not compute: Robograders, better AI, and code playing games

One of the reasons I love the ideas behind The New Aesthetic is that it takes a humanistic approach to digitally produced artifacts in physical space. It’s the realization that machines are leaving all these marks on what we consider our reality, and that there is a kind of art to it. We see patterns in their glitches, patterns they are “blind to,” and we, as humans, can make sense of them in a way the machines cannot.

The extension of this is giving up the notion that machines will solve everything. That code can be human.

There are different structures at work, a fundamental basis for understanding reality that is not shared between the physical and digital domains. We can map between them, but they are not the same space. Code is different. It’s materiality without physicality.

This is not a pessimistic view, by the way, but an optimistic look at the future. Because they are different, they allow a look at humanity that is not possible from within ourselves. When we see through the eyes of a machine, we are looking at a reality unfiltered by our contexts or culture. There is a perverse purity there that we might never be capable of ourselves.

However, with that type of sight, we also come to realize that certain things are simply not possible for code. The linguistic looseness that allows us to see patterns in the clouds and dream of impossible constructions is not achievable in code. It cannot understand the juiciness of post-structuralist thought: the meaning behind the meaning.

That last thought came out of an extended discussion (I love having a back and forth with people who are willing to write back) with Dr. Laura Gibbs of the University of Oklahoma on Google+ about a week ago. In writing about robograders, software that will supposedly read and grade papers for professors, I expressed that it was a fool’s errand to chase such a dream. You can write software to grade quantitative things, sure, but qualitative analysis of a text? Nope.

Which brought me, tonight, to “Chaining Together the Player and Character” and the wish of Caitlin Oram for games where the player and the character aren’t, well, chained together. It’s something I wish for too; however, I also don’t expect it to happen any time soon. Not because it is impossible (though there is a discussion to be had about whether plot is possible without juxtaposition), but because commercial games are products (commodities) that are finite.

One of the reasons Minecraft is so compelling to us is that not only can we change the world, but it grows with us (up to a very large limit). It can present the illusion of infinity, an ever-expanding world for us and others to change on a whim. There isn’t a mission or a global threat. No overt tension exists to drive us forward; we invent our own.

However, it’s also the reason smarter AI doesn’t exist in a game series like Mass Effect. The order of complexity is too great. It would need too much data, too many assets, too much writing. If a universe that size moved in nearly real time (as opposed to player time), it would reduce Shepard to a meaningless soldier. She cannot fulfill some grand purpose if the threat moves without her moving first.

Writing about game AI also brings me to some code “solving” Super Mario Bros. from yesterday. It’s an interesting look at breaking down the complexity of “playing” a game like Super Mario Bros. using lexicographic orderings (learning which bytes of game memory tend to increase during play, and preferring inputs that keep them increasing). But after watching some of the initial successes, and then some hilarious failures, I was brought back to the point of this loosely connected essay: code is only as smart as we think it is.
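The core trick, as I understand it, can be sketched in a few lines of Python. Everything below (the memory addresses, the toy `simulate` function, the function names) is illustrative and made up for this sketch, not taken from the actual tool; it only shows why a tuple of memory bytes, compared lexicographically, can act as a crude “progress” objective:

```python
# Sketch of a lexicographic-ordering objective for game play.
# Selected bytes of game memory are read in priority order and
# compared like digits of a number: earlier locations dominate.

def lex_score(memory, locations):
    """Read the bytes at `locations` in order; Python compares the
    resulting tuples lexicographically, so location order = priority."""
    return tuple(memory[loc] for loc in locations)

def best_input(state, inputs, simulate, locations):
    """Greedily pick the input whose simulated next state ranks
    highest under the lexicographic ordering."""
    return max(inputs, key=lambda i: lex_score(simulate(state, i), locations))

# Toy example: memory is a dict of address -> byte. Pretend 0x75
# holds the level number and 0x86 the on-screen x position, so
# advancing a level always beats merely walking right.
LOCATIONS = [0x75, 0x86]

def simulate(memory, button):
    """Hypothetical one-step emulator: only "right" changes anything."""
    nxt = dict(memory)
    if button == "right":
        nxt[0x86] = (nxt[0x86] + 1) % 256  # move forward one pixel
    return nxt

state = {0x75: 1, 0x86: 40}
print(best_input(state, ["left", "right", "jump"], simulate, LOCATIONS))
# → right
```

The hilarious failures follow directly from this design: any input sequence that happens to nudge the watched bytes upward looks like progress, whether or not Mario is actually getting anywhere.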

It’s often the worship of code as an ever-elusive cure-all that causes us so much stress. We want it to be human, forgetting sometimes that it cannot be. It isn’t some savior locked away waiting to be unleashed, merely a set of tools and augments to our reality. It’s another set of eyes and some extra senses. It’s not another person.