Musk, Woz, Hawking, and Robotics/AI Experts Urge Ban On Autonomous Weapons

An anonymous reader writes: An open letter published by the Future of Life Institute urges governments to ban offensive autonomous weaponry. The letter is signed by high-profile leaders in the science community and the tech industry, such as Elon Musk, Stephen Hawking, Steve Wozniak, Noam Chomsky, and Frank Wilczek. It's also signed — more importantly — by literally hundreds of expert researchers in robotics and AI. They say, "The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce."

A Programming Language For Self-Organizing Swarms of Drones

New submitter jumpjoe writes: Drones are becoming a staple of everyday news. Drone swarms are the natural extension of the drone concept for applications such as search and rescue, mapping, and agricultural and industrial monitoring. A new programming language, compiler, and virtual machine were recently introduced to specify the behaviour of an entire swarm with a single program. The language, called Buzz, allows self-organizing behaviour to accomplish complex tasks with simple programs. Details on the language and examples are available here. Full disclosure: I am one of the authors of the paper.
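The swarm-level idea can be sketched without Buzz itself: one program, run identically by every robot, from which a collective behaviour emerges. Below is a minimal Python sketch (illustrative only, not actual Buzz syntax; the robot model, communication range, and step size are invented) in which each robot steps toward the centroid of its in-range neighbours, producing self-organized aggregation:

```python
import random

# One control program, executed by every robot; aggregation emerges
# from purely local interactions. All constants are invented.

COMM_RANGE = 5.0   # communication radius
STEP = 0.1         # fraction of the distance to move each tick

def neighbors(me, swarm):
    """Robots within communication range of `me`, excluding itself."""
    return [r for r in swarm if r is not me and
            (r["x"] - me["x"]) ** 2 + (r["y"] - me["y"]) ** 2 <= COMM_RANGE ** 2]

def tick(me, swarm):
    """One control step: move toward the centroid of in-range neighbors."""
    near = neighbors(me, swarm)
    if not near:
        return
    cx = sum(r["x"] for r in near) / len(near)
    cy = sum(r["y"] for r in near) / len(near)
    me["x"] += STEP * (cx - me["x"])
    me["y"] += STEP * (cy - me["y"])

def spread(swarm):
    """Largest pairwise distance, as a crude measure of dispersion."""
    return max(((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5
               for a in swarm for b in swarm)

random.seed(1)
swarm = [{"x": random.uniform(0, 4), "y": random.uniform(0, 4)} for _ in range(10)]
initial_spread = spread(swarm)
for _ in range(200):
    for robot in swarm:   # robots update in turn; a real swarm is asynchronous
        tick(robot, swarm)
final_spread = spread(swarm)
```

No robot is told to aggregate; the behaviour falls out of the shared local rule, which is the property a swarm language like Buzz is designed to express directly.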

Which Movies Get Artificial Intelligence Right?

sciencehabit writes: Hollywood has been tackling artificial intelligence for decades, from Blade Runner to Ex Machina. But how realistic are these depictions? Science asked a panel of AI experts to weigh in on 10 major AI movies — what they get right, and what they get horribly wrong. It also ranks the movies from least to most realistic. Films getting low marks include Chappie, Blade Runner, and A.I.; high marks go to Bicentennial Man, Her, and 2001: A Space Odyssey.

Taking the Lawyers Out of the Loop

An Associated Press story carried by the Christian Science Monitor suggests that expert systems can already replace lawyers in a great many disputes (especially low-level ones, where the disputants don't need or don't want to see each other), and the realm of legal expertise that can be embodied in silicon will only grow. The article spends most of its time on Modria, a company whose software is being used in Ohio to "resolve disputes over tax assessments and keep them out of court"; a New York-based arbitration association has also deployed it to settle medical claims arising from certain types of car crashes. A few other companies get mentions as well. Modria's software has also been used to negotiate hundreds of divorces in the Netherlands, including contested ones: "If they reach a resolution, they can print up divorce papers that are then reviewed by an attorney to make sure neither side is giving away too much before they are filed in court."

Google Applies For Patents That Touch On Fundamental AI Concepts

mikejuk writes: Google may have been wowing the web with its trippy images from neural networks, but meanwhile it has revealed that it has applied for at least six patents on fundamental neural network and AI concepts. This isn't good for academic research or for the development of AI by companies. Some of the patents are on very specific things invented by Geoffrey Hinton's team, like using dropout during training or modifying data to provide additional training cases, but others cover very general ideas, such as classification itself. If Google were granted a patent on classification, it would cover just about every method used for pattern recognition! You might make the charitable assumption that Google has patented the ideas only to protect them — i.e., to stop other, more evil companies from patenting them and extracting fees from open source implementations of machine learning libraries. Either way, Google has just started an AI arms race, and you can expect others to follow.

Computer Program Fixes Old Code Faster Than Expert Engineers

An anonymous reader writes: Less than two weeks after one group of MIT researchers unveiled a system capable of repairing software bugs automatically, a different group has demonstrated another system called Helium, which "revamps and fine-tunes code without ever needing the original source, in a matter of hours or even minutes." The process works like this: "The team started with a simple building block of programming that's nevertheless extremely difficult to analyze: binary code that has been stripped of debug symbols, which represents the only piece of code that is available for proprietary software such as Photoshop. ... With Helium, the researchers are able to lift these kernels from a stripped binary and restructure them as high-level representations that are readable in Halide, a CSAIL-designed programming language geared towards image-processing. ... From there, the Helium system then replaces the original bit-rotted components with the re-optimized ones. The net result: Helium can improve the performance of certain Photoshop filters by 75 percent, and the performance of less optimized programs such as [the Windows image viewer] IrfanView by 400 to 500 percent." Their full academic paper (PDF) is available online.
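The "high-level representation" Helium lifts kernels into is the key to re-optimization: in Halide's model, the algorithm is a pure per-pixel function kept separate from any schedule or optimization, so the compiler is free to re-optimize it for new hardware. The sketch below illustrates that style in Python (it is not actual Halide or Helium output):

```python
# A 3-tap horizontal box blur defined point-wise, like a Halide Func:
# each output pixel is a pure function of input pixels, with no loop
# tiling, vectorization, or other schedule decisions baked in.

def clamp(i, lo, hi):
    """Clamp an index into the valid range (Halide-style boundary handling)."""
    return max(lo, min(hi, i))

def box_blur_1d(img):
    w = len(img)
    return [(img[clamp(x - 1, 0, w - 1)] + img[x] + img[clamp(x + 1, 0, w - 1)]) / 3
            for x in range(w)]

blurred = box_blur_1d([0, 0, 9, 0, 0])
print(blurred)  # -> [0.0, 3.0, 3.0, 3.0, 0.0]
```

Because the definition says only *what* each pixel is, not *how* to compute it efficiently, a tool that recovers this form from a stripped binary can hand the "how" back to an optimizing compiler.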

An Organic Computer Using Four Wired-Together Rat Brains

Jason Koebler writes: The brains of four rats have been interconnected to create a "Brainet" capable of completing computational tasks better than any one of the rats would have been able to on its own. Explains Duke University's Dr. Miguel Nicolelis: "Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains."

Google's Driverless Cars Now Rolling In the Heart of Texas

MarkWhittington notes that, as reported by The Wall Street Journal, Google has started testing its self-driving cars in Austin. These driverless cars, loaded with sensors, GPS transponders, and cameras, are now in service in "an area northeast and north of downtown Austin. The purpose of the test drives is to see if the car's software works in driving conditions outside of California and to develop a detailed map of Austin city streets. Each self-driving car has two human drivers ready to assume manual control if something goes wrong."

NVIDIA Hopes To Sell More Chips By Bringing AI Programming To the Masses

jfruh writes: Artificial intelligence typically requires heavy computing power, which can only help makers of specialized chips like NVIDIA. That's why the company is pushing its Digits software, which helps users design and experiment with neural networks. Version 2 of Digits moves beyond the command line with a GUI, in an attempt to broaden interest past the current academic market; it also makes programming for multi-chip configurations possible.

Dartmouth Contests Showcase Computer-Generated Creativity

An anonymous reader writes: A series of contests at Dartmouth College will pit humans against machines. Both will produce literature, poetry, and music, which will then be judged by humans who will try to determine which selections were computer-made. "Historically, often when we have advances in artificial intelligence, people will always say, 'Well, a computer couldn't paint a sunset,' or 'a computer couldn't write a beautiful love sonnet,' but could they? That's the question," said Dan Rockmore, director of the Neukom Institute for Computational Science at Dartmouth.

Machine Learning System Detects Emotions and Suicidal Behavior

An anonymous reader writes with word, as reported by The Stack, of a new machine learning technology under development at the Technion-Israel Institute of Technology "which can identify emotion in text messages and email, such as sarcasm, irony and even antisocial or suicidal thoughts." Computer science student Eden Saig, the system's creator, explains that text and email messages lack many of the non-verbal cues (like facial expression) that we use to interpret language. His software applies semantic analysis to these online communications and tries to infer their emotional import and context by looking for word patterns, not just superficial markers like emoticons or explicit labels like "[sarcasm]", and can in theory identify clues of threatening or self-destructive behavior.
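As a rough illustration of classifying text by learned word patterns (this is not Saig's system; the Naive Bayes model, training sentences, and labels below are all invented for illustration), a classifier can score a message against per-class word statistics:

```python
from collections import Counter
import math

# Toy Naive Bayes text classifier: no emoticons or explicit labels,
# just word statistics learned from (tiny, invented) labeled examples.

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Per-class word counts and class priors; examples = [(text, label), ...]."""
    counts, priors = {}, Counter()
    for text, label in examples:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(tokenize(text))
    return counts, priors

def classify(text, counts, priors):
    total = sum(priors.values())
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for cls, prior in priors.items():
        denom = sum(counts[cls].values()) + len(vocab)   # Laplace smoothing
        score = math.log(prior / total)
        score += sum(math.log((counts[cls][w] + 1) / denom) for w in tokenize(text))
        if score > best_score:
            best, best_score = cls, score
    return best

examples = [
    ("oh great another wonderful meeting", "sarcastic"),
    ("great job really wonderful work sure", "sarcastic"),
    ("the meeting starts at noon", "neutral"),
    ("please send the report by friday", "neutral"),
]
counts, priors = train(examples)
label = classify("oh sure another wonderful friday", counts, priors)
print(label)  # -> sarcastic
```

A real system layers far richer semantic analysis on top of this kind of statistical base, but the principle, scoring word patterns learned from examples, is the same.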

MIT System Fixes Software Bugs Without Access To Source Code

jan_jes writes: MIT researchers have presented a new system at the Association for Computing Machinery's Programming Language Design and Implementation conference that repairs software bugs by automatically importing functionality from other, more secure applications. According to MIT, "The system, dubbed CodePhage, doesn't require access to the source code of the applications. Instead, it analyzes the applications' execution and characterizes the types of security checks they perform. As a consequence, it can import checks from applications written in programming languages other than the one in which the program it's repairing was written."

Detecting Nudity With AI and OpenCV

mikejuk writes: AI gets put to some strange tasks. Not satisfied with the Turing test or inventing Skynet, Algorithmia has put together a nudity detector. Take one face detector from OpenCV and use it to find a nose. Take the skin color from the nose and then see which parts of the body in the photo are skin-colored. If there is a lot of skin color, shout NUDE! The website lets you put in your own photos, classifies them as Rude or Good, and gives a confidence estimate. Obama with his top off — no problem; but the familiar image-processing test photo of Lena the pin-up girl rates a 'Rude'.
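The heuristic described reduces to a few steps that are easy to sketch. The toy version below skips the OpenCV face/nose detection and takes the sampled skin colour as a direct input; the tolerance, threshold, and image are invented for illustration:

```python
# Toy skin-ratio classifier in the spirit of the described pipeline:
# sample a skin colour (standing in for the pixel under a detected nose),
# then flag the image if too many pixels fall near that colour.

TOLERANCE = 40      # max per-channel distance from the sampled skin colour
SKIN_RATIO = 0.30   # fraction of skin-coloured pixels that triggers "Rude"

def is_skin(pixel, sample):
    """True if every channel of `pixel` is within TOLERANCE of `sample`."""
    return all(abs(p - s) <= TOLERANCE for p, s in zip(pixel, sample))

def classify(img, sample):
    pixels = [px for row in img for px in row]
    ratio = sum(is_skin(px, sample) for px in pixels) / len(pixels)
    return ("Rude" if ratio > SKIN_RATIO else "Good"), ratio

# Synthetic 10x10 image: top half skin-toned, bottom half blue.
skin, blue = (200, 160, 140), (30, 60, 200)
img = [[skin] * 10 for _ in range(5)] + [[blue] * 10 for _ in range(5)]
verdict, ratio = classify(img, sample=(205, 165, 145))
print(verdict, ratio)  # -> Rude 0.5
```

The fragility of the approach is visible even here: any large skin-coloured region (a face close to the camera, a beige wall) inflates the ratio, which is roughly why shirtless-but-innocent photos and false positives both occur.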

WSJ Overstates the Case Of the Testy A.I.

mbeckman writes: According to a WSJ article titled "Artificial Intelligence machine gets testy with programmer," a Google computer program using a database of movie scripts supposedly "lashed out" at a human researcher who was repeatedly asking it to explain morality. After several apparent attempts to politely fend off the researcher, the AI ends the conversation with "I'm not in the mood for a philosophical debate." This, says the WSJ, illustrates how Google scientists are "teaching computers to mimic some of the ways a human brain works."

As any AI researcher can tell you, this is utter nonsense. We have no real idea how the human brain, or any other brain, works, so we can hardly teach a machine how brains work. At best, Google is programming (not teaching) a computer to mimic human conversation under highly constrained circumstances. And the methods used have nothing to do with true cognition.

AI hype aimed at the public has grown progressively more strident in recent years, misleading lay people into believing researchers are much further along than they really are — by orders of magnitude. I'd love to see legitimate AI researchers condemn this kind of hucksterism.

GA Tech Researchers Train Computer To Create New "Mario Brothers" Levels

An anonymous reader writes with a Georgia Institute of Technology report that researchers there have created a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage, and is then able to create original new sections of a game. The team tested their system, the first of its kind, with the original Super Mario Brothers, a well-known two-dimensional platformer whose conventions should allow the automatic level designer to replicate results across similar games. Rather than the playable character himself, the Georgia Tech system focuses on the in-game terrain. "For example, pipes in the Mario games tend to stick out of the ground, so the system learns this and prevents any pipes from being flush with grassy surfaces. It also prevents 'breaks' by using spatial analysis – e.g. no impossibly long jumps for the hero."
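A stripped-down version of this kind of learned level design can be sketched as follows (the column representation, example level, and constants are invented; the real system learns from video, not a hand-written level): learn column-to-column transition frequencies from an example, then sample new levels while rejecting impossibly long gap runs:

```python
import random

# Toy level generator: a Markov chain over terrain columns, learned from
# a hand-written example level, plus a spatial-analysis check that
# rejects gap runs the hero could never jump across.

MAX_GAP = 3   # longest run of gap columns the hero can clear (invented)

def learn(level):
    """Bigram transition table: column type -> list of observed successors."""
    table = {}
    for a, b in zip(level, level[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    level = [start]
    while len(level) < length:
        nxt = rng.choice(table[level[-1]])
        if nxt == "gap":
            run = 0
            for col in reversed(level):   # count the trailing gap run
                if col != "gap":
                    break
                run += 1
            if run >= MAX_GAP:            # would make the jump impossible
                continue
        level.append(nxt)
    return level

example = ["ground", "ground", "pipe", "ground", "gap", "ground",
           "ground", "pipe", "ground", "gap", "gap", "ground", "ground"]
rng = random.Random(7)
new_level = generate(learn(example), "ground", 20, rng)
print(new_level)
```

Generated levels statistically resemble the example (pipes follow ground, gaps are short) without copying it, which is the essence of learning level-design conventions rather than levels.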

YouTube Algorithm Can Decide Your Channel URL Now Belongs To Someone Else

An anonymous reader writes: In 2005, blogger Matthew Lush registered "Lush" as his account on the then-nascent YouTube service, receiving the matching vanity URL for his channel. He went on to use this address on his marketing materials and merchandise. Now YouTube has taken the URL away and reassigned it to the Lush cosmetics brand. Google states that an algorithm determined the URL should belong to the cosmetics firm rather than its long-time owner, and insists that it is not possible to reverse the unrequested change. Although Lush cosmetics has the option of switching away from the newly received URL and thereby freeing it up for Mr. Lush's use, the company says it has not decided whether it will. Google has offered to pay for some of Mr. Lush's marketing expenses as compensation.

NIST Workshop Explores Automated Tattoo Identification

chicksdaddy writes: Security Ledger reports on a recent NIST workshop dedicated to improving the art of automated tattoo identification. It used to be that the only place you'd commonly see tattoos was at your local VA hospital. No more. In the last 30 years, body art has gone mainstream. One in five adults in the U.S. has one. For law enforcement and forensics experts, this is a good thing; tattoos are a great way to identify both perpetrators and their victims. Given the number and variety of tattoos, though, how to describe and catalog them? Clearly this is an area where technology can help, but it's also one of those "fuzzy" problems that challenges the limits of artificial intelligence.

The National Institute of Standards and Technology (NIST) Tattoo Recognition Technology Challenge Workshop challenged industry and academia to work toward automated, image-based tattoo matching technology. Participating organizations used an FBI-supplied dataset of thousands of tattoo images drawn from government databases. They were asked to develop methods for identifying a tattoo in an image; identifying visually similar or related tattoos from different subjects; identifying the same tattoo on the same subject over time; identifying a small region of interest contained in a larger image; and identifying a tattoo from a visually similar image such as a sketch or scanned print.

Robot Swarm Behavior Suggests Forgetting May Be Important To Cultural Evolution

Hallie Siegel writes: Can we learn about human cultural evolution by studying how group behaviour in robots evolves? Researchers in the Artificial Culture Project are trying to do just that. Prof. Alan Winfield from the Bristol Robotics Lab discusses his latest research on modelling how cultural memes develop in robots that pass learned behaviours to other robots in their group. Interestingly, the findings suggest that imitation noise (i.e., when a behaviour isn't copied perfectly) and forgetfulness (i.e., when a robot has only limited memory of the behaviours it is trying to imitate) lead to stronger cultural memes in the robot behaviour.

Turning Neural Networks Upside Down Produces Psychedelic Visuals

cjellibebi writes: Neural networks designed to recognize images turn out to hold some interesting capabilities for generating them. Run backwards, they can enhance existing images to resemble the images they were trained to recognize. The results are pretty trippy. A Google Research blog post explains the work in detail, with pictures and even a video. The Guardian has a digested article for the less tech-savvy.
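The "run it backwards" trick is ordinary gradient ascent, applied to the input instead of the weights: freeze a trained unit and nudge the pixels so its activation grows, and the input drifts toward the pattern the unit detects. A one-neuron sketch shows the mechanism (the unit, its weights, and the learning rate are invented for illustration; the real work does this over whole layers of a deep network):

```python
import math
import random

# A single frozen "neuron" with a fixed weight pattern. Gradient ascent
# on the INPUT increases its activation, pulling the input toward
# the pattern w that the unit responds to.

w = [0.5, -1.0, 2.0]   # the pattern this unit detects (invented)

def activation(x):
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

def grad_wrt_input(x):
    """d activation / d x_i = (1 - tanh^2(z)) * w_i, where z = w . x"""
    z = sum(wi * xi for wi, xi in zip(w, x))
    g = 1 - math.tanh(z) ** 2
    return [g * wi for wi in w]

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(3)]
before = activation(x)
for _ in range(100):   # gradient ASCENT on the input, weights untouched
    x = [xi + 0.1 * gi for xi, gi in zip(x, grad_wrt_input(x))]
after = activation(x)
print(before, "->", after)
```

In the real networks, doing this for units that detect dogs, eyes, or pagodas is exactly what plants dog faces and pagodas into clouds and landscapes.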