Tag Archives: artificial intelligence

Superintelligence

I’m reading (listening to, actually) Superintelligence by Nick Bostrom and finding it surprisingly interesting. The book focuses on artificial intelligence, but early on he talks about possibilities for enhancing biological intelligence using current or near-future technology. Here’s an online paper where he discusses the same concept. Just using current in vitro fertilization technology, which creates about 10 embryos at a time, you could theoretically boost IQ by about 11 points by picking the smartest of the 10. (This is just a thought experiment, so let’s not worry about the other 9. Of course, some otherwise reasonable people are going to have an ethical problem with this.) Do that 10 generations in a row and you could theoretically boost IQ by 100 points or more. Einstein had an IQ of about 160, so you could produce a race of super-Einsteins using current technology. Now, it would take around 250 years to do this, and you would have to get everybody to do it, both to make a real impact at the societal level and to avoid the disturbing implications for those left behind.
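The back-of-the-envelope math can be checked with a quick Monte Carlo simulation. The numbers here are my own illustrative assumptions, not figures from the book: an overall IQ standard deviation of 15, with roughly half the variance differing between siblings, giving a within-family standard deviation of about 7.5 points.

```python
import random
import statistics

def embryo_selection_gain(n_embryos=10, within_family_sd=7.5, trials=100_000):
    """Average IQ gain from picking the best of n_embryos siblings.

    Assumes each sibling's IQ potential varies around the parental
    midpoint with the given standard deviation (an illustrative guess).
    """
    gains = []
    for _ in range(trials):
        siblings = [random.gauss(0, within_family_sd) for _ in range(n_embryos)]
        gains.append(max(siblings))  # pick the "smartest" embryo
    return statistics.mean(gains)

gain = embryo_selection_gain()
print(f"Expected gain per generation: about {gain:.1f} IQ points")
```

Under these assumptions one round of 1-in-10 selection lands right around the 11-point figure; compounding that over 10 generations is what produces the 100-plus-point number.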

Using a likely near-future technology called iterative embryo selection, you could theoretically extract the DNA from one or more embryos, convert it into sperm and egg cells, combine those to make a new set of embryos, and repeat. Going through the 10 generations of 1-in-10 selection might then take months or a few years rather than 250 years. Now it’s potentially something big.

I’m a bit worried about super-villains. I don’t see any reason to think twice-as-smart humans will automatically be twice as ethical or twice as empathetic, and it might only take one really bad apple to ruin whatever utopia our newly brilliant problem-solving selves come up with.

Like I said, the book is really about artificial intelligence. He believes that even humans enhanced to have, say, double the current average IQ will eventually be far outclassed by machines. It is not going to take 250 years for that to happen, so creating smarter humans in 250 years won’t make a lot of sense. If we create smarter humans in the short term, he thinks they will just use their smarts to make smarter machines even sooner.

This is just scratching the surface. It’s really a fascinating book, and somewhat like when I first read The Singularity is Near, I kind of feel like I am being let in on secrets that nobody else around me knows.

Guaranteeing AI Robustness against Deception

DARPA is funding research into defense against AI attacks.

The growing sophistication and ubiquity of ML components in advanced systems dramatically increases capabilities, but as a byproduct, increases opportunities for new, potentially unidentified vulnerabilities. The acceleration in ML attack capabilities has promoted an arms race: As defenses are developed to address new attack strategies and vulnerabilities, improved attack methodologies capable of bypassing the defense algorithms are created. The field now appears increasingly pessimistic, sensing that developing effective ML defenses may prove significantly more difficult than designing new attacks, leaving advanced systems vulnerable and exposed. Further, the lack of a comprehensive theoretical understanding of ML vulnerabilities in the “Adversarial Examples” field leaves significant exploitable blind spots in advanced systems and limits efforts to develop effective defenses.
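As a concrete illustration of the “adversarial examples” the solicitation refers to (this toy is mine, not DARPA’s): for even a simple linear classifier, an attacker can flip a prediction with a per-pixel change too small to notice by nudging every input in the direction that hurts the score most, known as the fast gradient sign method. The model, dimensions, and numbers below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 784                               # e.g. a 28x28 grayscale image, pixels in [0, 1]
w = rng.normal(size=d)                # weights of a toy linear classifier

x = rng.uniform(0.0, 1.0, size=d)     # a random "image"
b = 1.0 - w @ x                       # bias chosen so the clean score is exactly +1.0

def score(img):
    return w @ img + b                # predict class 1 if score > 0

# Fast gradient sign method: the gradient of the score w.r.t. the input is w,
# so stepping each pixel by -epsilon * sign(w) reduces the score as much as
# possible for a given maximum per-pixel change.
epsilon = 0.02                        # a 2% shift per pixel, visually negligible
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print("clean score:   ", score(x))      # positive -> class 1
print("attacked score:", score(x_adv))  # negative -> class 0
```

The attack works because hundreds of tiny per-pixel nudges all push the score the same way, which is exactly the kind of blind spot the quoted passage says defenders lack a theory for.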

smart home gadgets

This article from MIT says this year’s CES (Consumer Electronics…what? anyway it’s the big annual consumer electronics expo in Las Vegas) was all about voice-controlled home gadgets. I’m excited about certain things. I like the idea of a video doorbell and smoke detectors that can text me if they sense something while I am not home, for example.

The main concern people seem to have is privacy. While that concerns me a little, I am more concerned about repair and replacement of all this stuff as it wears out. There are a lot of little things broken in my house right now. I know how to fix some of them. For others I know who to call, although it can be a royal pain to get them to actually show up, diagnose the problem, and then follow up until it is fixed. My “home warranty”, which is really just a contract with a network of independent repair people, helps a little in getting them to be responsive and follow up, but dealing with them is still a pain. While I am fixing some things, other things are going to break. Things like keyholes and doorbells last at least 20 years with minor repairs, and traditional dumb appliances last 10 or so years with minor repairs. It is better to accept that things break at a certain rate and deal with it than to get frustrated, although they do seem to have a tendency to break all at once, just when I have other challenges going on in my family and work life. I know there is absolutely nothing special about me and everyone deals with the same issues, except the super rich, who can just pay someone else to worry about it.

Now, when we add all the smart technology to the dumb things in our home, they are still going to have most of the same dumb problems they have always had, requiring occasional minor repairs and replacement after 10-20 years. But the added technology is going to need some combination of physical repairs and tech support frequently, and it will probably need to be replaced every 2 or 3 years like any other electronic device. Except now it will be 100 little things breaking and needing replacement in your house. Probably, you will need some kind of service contract for someone to come check and fix things at least monthly.
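A rough back-of-the-envelope calculation shows why a monthly service visit is plausible. The gadget count and lifespan here are my own illustrative guesses, not figures from the article:

```python
# Rough failure-rate estimate for a "smart" home, assuming each gadget
# needs repair or replacement on average once per lifespan.
# All numbers are illustrative guesses, not measured data.

smart_gadgets = 100          # sensors, plugs, bulbs, cameras, hubs...
gadget_lifespan_years = 2.5  # a typical consumer-electronics replacement cycle

failures_per_year = smart_gadgets / gadget_lifespan_years
failures_per_month = failures_per_year / 12

print(f"~{failures_per_year:.0f} failures per year, "
      f"or ~{failures_per_month:.1f} per month")
```

Even with generous assumptions, 100 devices on a 2-to-3-year cycle means something in the house is failing every week or two.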

AI and socialism

AI and algorithms are being used to target social services more precisely in Denmark. This article finds the level of data being collected on individuals insidious. I think it could be in the wrong hands, but this sort of thing seems to work in the Scandinavian countries where people actually trust their governments and neighbors. In the U.S., we tend to take paranoia to extremes, for example refusing to have a national ID card when it could solve some of our voter registration issues.

The municipality of Gladsaxe in Copenhagen, for example, has quietly been experimenting with a system that would use algorithms to identify children at risk of abuse, allowing authorities to target the flagged families for early intervention that could ultimately result in forced removals.

The children would be targeted based on specially designed algorithms tasked with crunching the information already gathered by the Danish government and linked to the personal identification number that is assigned to all Danes at birth. This information includes health records, employment information, and much more.

From the Danish government’s perspective, the child-welfare algorithm proposal is merely an extension of the systems it already has in place to detect social fraud and abuse. Benefits and entitlements covering millions of Danes have long been handled by a centralized agency (Udbetaling Danmark), and based on the vast amounts of personal data gathered and processed by this agency, algorithms create so-called puzzlement lists identifying suspicious patterns that may suggest fraud or abuse. These lists can then be acted on by the “control units” operated by many municipalities to investigate those suspected of receiving benefits to which they are not entitled. The data may include information on spouses and children, as well as information from financial institutions.

October 2018 in Review

Most frightening stories:

  • The Trump administration has slashed funding to help the U.S. prepare for the next pandemic.
  • I read more gloomy expert opinions on the stability and resilience of the global financial system.
  • A new, depressing IPCC report came out. Basically, implementing the Paris agreement would be too little, too late, and we are not even implementing it. There is at least some movement toward a carbon tax in the U.S., a hopeful development, except that oil companies are in favor of it, which makes it suspicious. There is a carbon tax initiative on the ballot in Washington State this November that the oil companies appear to be terrified of, so comparing the two could be instructive; the industry strategy may be to get a weaker law at the federal level as protection against a patchwork of tougher laws at the state and local levels.

Most hopeful stories:

  • There is no evidence that kids in U.S. private schools do any better than kids in U.S. public schools, once you control for family income. (Okay – I admit I put this in the hopeful column because I have kids in public school.)
  • Regenerative agriculture is an idea to sequester carbon by restoring soil and protecting biodiversity on a global scale.
  • Applying nitrogen fixing bacteria to plants that do not naturally have them may be a viable way to reduce nitrogen fertilizer use and water pollution.

Most interesting stories, that were not particularly frightening or hopeful, or perhaps were a mixture of both:

  • New tech roundup: Artificial spider silk is an alternative to carbon fiber. Certain types of science, like drug and DNA experiments, can be largely automated. A “quantum internet” could mean essentially unbreakable encryption.
  • Modern monetary theory suggests governments might be able to print (okay, “create”) and spend a lot more money without serious repercussions. What I find odd about these discussions is that they focus almost entirely on inflation and currency exchange values, while barely acknowledging that money has some relationship to actual physical constraints. To me, it has always seemed that one function of the financial system is to start flashing warning lights and make us face the realities of how much we can do before we are all actually starving and freezing in the dark. It could be that we are in the midst of a long, slow slide in our ability to improve our physical quality of life, but instead of manifesting as a long, slow slide, it comes as a series of random shocks, each a little harder to recover from than the last.
  • I read some interesting ideas on fair and unfair inequality. Conservative politicians encourage people not to make a distinction between alleviating poverty and the idea of making everybody equal. These are not the same thing at all because living just above the poverty line is no picnic and is not the same thing as being average. There is a strong moral case to be made that nobody “deserves” to live in poverty even if they have made some mistakes. And simply “creating jobs” in high-poverty areas sounds like a nice conservative alternative to handouts, except that there isn’t much evidence that it works.

applying the latest data science to intimate acts

WARNING: I never promised this would be a 100% family-friendly blog. Nevertheless, it’s been a while since I checked in on the latest sex robots. Well, this is a blog about the fabulous science fiction future, and we all need to face up to the fact that sex robots are likely to be a part of that.

Anyway, I offer this paper first of all because it’s funny, and second of all because even if it turns out to be a joke, the technology to record thousands of hours of sexual acts on video, analyze them using the latest data science techniques, and program the results into a robot almost certainly exists. Yesterday’s “phone sex” will almost certainly evolve into tomorrow’s virtual brothel. I think it is more or less harmless, although it means the way real human beings interact with each other will continue to evolve. But that has been going on for a long time and there is no reason to think it should stop now.


MIT makes crazy AI on purpose

A group at MIT showed an AI algorithm really disturbing pictures and then asked it what it saw in some inkblots.

When a “normal” algorithm generated by artificial intelligence is asked what it sees in an abstract shape it chooses something cheery: “A group of birds sitting on top of a tree branch.”

Norman sees a man being electrocuted.

And where “normal” AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from “the dark corners of the net” would do to its world view.

The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit.

Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.

Personally, I’m afraid of the people who spend their time seeking out photos of people dying violently and posting them on Reddit. As for the algorithm, I suppose it could be trained to identify and block violent images like the ones it has been shown, or kiddie porn, or ads targeting minors, etc. Or in the wrong hands, it could be used to block political speech or repress certain people or groups. Or the highest bidding company could get to use it to repress competitors’ ads.

the French AI strategy

Other countries (than the United States) are developing strategies for how artificial intelligence will affect work, productivity, and growth in the near future.

France’s national strategy also reveals that Macron’s government is wrestling with how to ensure that AI supports inclusivity and diversity, and how to make certain that its implementation is transparent. The French aren’t just theorizing; they’re taking action. France plans to invest 1.5 billion euros (almost $1.8 billion) in artificial intelligence research over the next five years. The French are looking to create their own AI ecosystem, train the next generation of scientists and engineers, and make sure that their workforce is prepared for an automated future.

France isn’t alone. Last month, the European Union’s executive branch recommended its member states increase their public and private sector investment in AI. It also pledged billions in direct research spending. Meanwhile, China laid out its AI plan for global dominance last year, a plan that has also been backed up with massive investment. China’s goal is to lead the world in AI technology by 2030. Around the world, our global economic competitors are taking action on artificial intelligence.

It’s therefore striking that the United States doesn’t have a national artificial intelligence plan.

The fact that I don’t find it striking reflects my lowered expectations more than anything. We don’t really have a strategy for infrastructure or education either, for example.

quantum computers

There has been some progress on quantum computers.

Quantum computers, after decades of research, have nearly enough oomph to perform calculations beyond any other computer on Earth. Their killer app is usually said to be factoring large numbers, which are the key to modern encryption. That’s still another decade off, at least. But even today’s rudimentary quantum processors are uncannily matched to the needs of machine learning. They manipulate vast arrays of data in a single step, pick out subtle patterns that classical computers are blind to, and don’t choke on incomplete or uncertain data. “There is a natural combination between the intrinsic statistical nature of quantum computing … and machine learning,” said Johannes Otterbach, a physicist at Rigetti Computing, a quantum-computer company in Berkeley, California.

If anything, the pendulum has now swung to the other extreme. Google, Microsoft, IBM and other tech giants are pouring money into quantum machine learning, and a startup incubator at the University of Toronto is devoted to it. “‘Machine learning’ is becoming a buzzword,” said Jacob Biamonte, a quantum physicist at the Skolkovo Institute of Science and Technology in Moscow. “When you mix that with ‘quantum,’ it becomes a mega-buzzword.”
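To see why quantum processors mesh so naturally with machine learning’s appetite for big linear algebra (a toy illustration of my own, not from the article): an n-qubit state is a vector of 2^n amplitudes, and a single gate transforms all of them at once. A classical simulation has to do that exponential bookkeeping explicitly.

```python
import numpy as np

def hadamard_all(n_qubits):
    """Simulate applying a Hadamard gate to every qubit of |00...0>.

    Classically we must store and transform all 2**n amplitudes; a
    quantum processor holds them in n physical qubits and applies
    each gate in one physical step.
    """
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                       # the |00...0> basis state
    for q in range(n_qubits):
        # Apply H to qubit q: expose that qubit's axis, contract with H.
        state = state.reshape(2 ** q, 2, -1)
        state = np.einsum('ij,ajb->aib', H, state).reshape(-1)
    return state

state = hadamard_all(10)
print(len(state))   # 1024 amplitudes from just 10 qubits
print(state[0])     # uniform superposition: each amplitude is 1/32
```

Ten qubits already mean a 1024-dimensional vector, and every added qubit doubles it; that exponential state space is the “vast arrays of data in a single step” the quote refers to.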