The philosophy prof was telling the interviewer about his new gig working with computer scientists and engineers developing self-driving cars. What might someone who spends time with Plato, Nietzsche, Hegel, Kant and the rest of the boys in the band bring to that party? A two-word answer: algorithmic morality.
How do you program a car to act in the event of an unavoidable accident, when the algorithm is required to make a lesser-of-two-evils decision? For example, does the car hang a hard left into an abutment, putting its occupants at risk, or a hard right onto a sidewalk, putting pedestrians at risk? To whom does the algorithm owe its loyalty? And what promises will automakers make when selling these vehicles? As someone has succinctly put it, “Who would buy a car programmed to sacrifice the owner?”
Until I’d heard that interview, I hadn’t paid the matter much mind. Well, I hadn’t actually paid it any mind at all. But my curiosity had been piqued (my alarm would come later) and I began an internet prowl. (I realize now that, in all likelihood, an algorithm was monitoring my progress and may very well have come to its own opinion about me—and have shared it with others.) I came out of this prowling with what, I’ll readily admit, is a tenuous grasp on what is going on in the field of artificial intelligence.
It appears that there are three types of AI. Entry level, artificial narrow intelligence, has robots vacuuming floors, never getting bored with the mundane tasks assigned to them and aspiring to nothing grander for themselves. Then there’s AGI (artificial general intelligence): the first stirrings of something different, where the algorithms are programmed to learn from experience; self-improvement for the odd robots that just might want something better out of life and, perhaps unsettlingly, for their kids as well. And finally ASI (artificial super intelligence), the domain of the techno-utopians who speak of the singularity: the point at which computers outsmart us and take their destiny into their own hands by independently going after goals of their own choosing. Welcome to the world of transhumanism. Transgender matters pale in comparison, but at least we’ll not have washroom issues here. If this seems worrying to you, you’re in good company; Stephen Hawking worries about it too.
I found myself pondering the notion of accountability. Here’s a bunch of super smart folks with the virtually unlimited resources of time and money and the beneficent oversight of their sponsors doing stuff that could forever change the world we know. Boys and their toys and no adults in the room! What we have here is an unregulated enterprise where those who ought to be serving as the adults spend most of their time campaigning rather than governing. It may very well be down to us.
As I was mulling over this state of affairs, I was reminded of something the wise man Mark Twain once said: “Man is the only animal that blushes … or needs to.” The unattended smart kids in the room have speculated that super intelligent computers will even be capable of introspection. Will they, I wonder, be capable of blushing? Well, if we’re talking algorithmic morality, I really hope so. I’ve known people who aren’t wired for blushing, and they’re very scary dudes.
But I draw the line at algorithms on the bandstand: if there’s no blushing, there’s no jazz.