Cloud

Coalition Including Microsoft, Linux Foundation, GitHub Urges Green Software Development (bloombergquint.com) 136

"To help realize the possibility of carbon-free applications, Microsoft, the consultancies Accenture and ThoughtWorks, the Linux Foundation, and Microsoft-owned code-sharing site, GitHub, have launched The Green Software Foundation," reports ZDNet: Announced at Microsoft's Build 2021 developer conference, the foundation is trying to promote the idea of green software engineering - a new field that looks to make code more efficient and reduce carbon emitted from the hardware it's running on... The foundation wants to set standards, best practices and patterns for building green software; nurture the creation of trusted open-source and open-data projects and support academic research; and grow an international community of green software ambassadors. The goal is to help the Information and Communication Technology sector to reduce its greenhouse gas emissions by 45% before 2030.
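The foundation's goal of "making code more efficient" ultimately cashes out in measurable energy and carbon terms. As a rough illustration of the kind of metric involved (this sketch and its figures are mine, not a foundation standard), operational carbon can be estimated as energy consumed times the carbon intensity of the grid powering it:

```python
# Illustrative sketch: estimating the operational carbon of a workload as
# energy used multiplied by grid carbon intensity. All numbers below are
# made-up examples, not measurements from any real system.

def operational_carbon_g(energy_kwh: float, grid_intensity_g_per_kwh: float) -> float:
    """Grams of CO2-equivalent emitted to power a workload."""
    return energy_kwh * grid_intensity_g_per_kwh

# A job consuming 2.5 kWh on a grid emitting 400 gCO2e/kWh:
print(operational_carbon_g(2.5, 400))  # 1000.0 g, i.e. 1 kg CO2e
```

Under a model like this, "green software engineering" means shrinking either factor: using less energy (efficient code, better hardware utilization) or running when and where the grid is cleaner.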

That includes mobile network operators, ISPs, data centers, and all the laptops being snapped up during the pandemic. "We envision a future where carbon-free software is standard - where software development, deployment, and use contribute to the global climate solution without every developer having to be an expert," Erica Brescia, COO of GitHub said in a statement. Microsoft president Brad Smith said "the world confronts an urgent carbon problem."

"It will take all of us working together to create innovative solutions to drastically reduce emissions. Microsoft is joining with organizations who are serious about an environmentally sustainable future to drive adoption of green software development to help our customers and partners around the world reduce their carbon footprint."

VentureBeat also points out that Microsoft "recently launched a $1 billion Climate Innovation Fund to accelerate the global development of carbon reduction, capture, and removal technologies."

But Bloomberg explores the rationale behind the new foundation: Data centers now account for about 1% of global electricity demand, and that's forecast to rise to 3% to 8% in the next decade, the companies said in a statement Tuesday, timed to Microsoft's Build developers conference... While it's tough to determine exactly how much carbon is emitted by individual software programs, groups like the Green Software Foundation examine metrics such as how much electricity is needed, whether microprocessors are being used efficiently, and the carbon emitted in networking. The foundation plans to look at curricula and developing certifications that would give engineers expertise in this space. As with areas like data science and cybersecurity, there will be an opportunity for engineers to specialize in green software development, but everyone who builds software will need at least some background in it, said Jeff Sandquist, a Microsoft vice president for developer relations.

"This will be the responsibility of everybody on the development team, much like when we look at security, or performance or reliability," he said. "Building the application in a sustainable way is going to matter."

The Almighty Buck

'Intelligent NFT' Created, Linked To a Machine-Learning Chatbot (decrypt.co) 22

Decrypt reports on the world's first "intelligent NFT" (or iNFT), being auctioned off in June as part of a collection of digital artworks at Sotheby's.

Her name is Alice: The brainchild of artist Ben Gentilli's Robert Alice studio and software developers Alethea AI, Alice is a non-fungible token (NFT), a blockchain-based token that can be used to prove ownership of a digital or physical asset. In this case, the asset in question is a machine-learning bot that uses a generative language model based on the OpenAI GPT-3 engine.

That means she's able to hold (somewhat stilted) conversations about life, the universe and everything... Since Alice "learns" from each audience interaction, drifting further from the original seed text, it becomes a decentralized manifesto. "It's fairly loose, because the audience can take it anywhere," Gentilli says. Alice has strong views on NFTs, as you might expect. "Non-fungible tokens are a way to liberate artists and give them the power of the blockchain," she tells me. But she's a little hazy on the details. Asked how, exactly, that would work, all she can come up with is, "I don't know. I am not an artist..."

So, is there an appetite for NFTs that talk back? Alethea CEO Arif Khan thinks so. "We're actually building a protocol that will allow you to take any NFT, put it into the smart contract infrastructure that we've built, and make it intelligent and interactive," he says. Your Beeple art piece or CryptoPunk could start talking back to you, he suggests. Or you could take your grandparent's diaries and use them as the seed text for a generative language bot. But do you want your CryptoPunk to talk to you? Chatbots already exist, and it's not clear why you'd need that bot to be attached to an NFT.

On the other hand, art can be a way to explore the implications of new technologies, Gentilli argues: "When you think about the whole trajectory of synthetic media, artists have been the people probably most known for experimenting with it at its rawest edge."

AI

AI Could Soon Write Code Based On Ordinary Language (wired.com) 57

An anonymous reader quotes a report from Wired: On Tuesday, Microsoft and OpenAI shared plans to bring GPT-3, one of the world's most advanced models for generating text, to programming based on natural language descriptions. This is the first commercial application of GPT-3 undertaken since Microsoft invested $1 billion in OpenAI last year and gained exclusive licensing rights to GPT-3. "If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from," said Microsoft CEO Satya Nadella in a keynote address at the company's Build developer conference. "The code writes itself."

Microsoft VP Charles Lamanna told WIRED the sophistication offered by GPT-3 can help people tackle complex challenges and empower people with little coding experience. GPT-3 will translate natural language into PowerFx, a fairly simple programming language similar to Excel commands that Microsoft introduced in March. Microsoft's new feature is based on a neural network architecture known as Transformer, used by big tech companies including Baidu, Google, Microsoft, Nvidia, and Salesforce to create large language models using text training data scraped from the web. These language models continually grow larger. The largest version of Google's BERT, a language model released in 2018, had 340 million parameters, a building block of neural networks. GPT-3, which was released one year ago, has 175 billion parameters. Such efforts have a long way to go, however. In one recent test, the best model succeeded only 14 percent of the time on introductory programming challenges compiled by a group of AI researchers. Still, researchers who conducted that study conclude that tests prove that "machine learning models are beginning to learn how to code."

AI

A Disturbing, Viral Twitter Thread Reveals How AI-Powered Insurance Can Go Wrong (vox.com) 49

An anonymous reader quotes a report from Vox: Lemonade, the fast-growing, machine learning-powered insurance app, put out a real lemon of a Twitter thread on Monday with a proud declaration that its AI analyzes videos of customers when determining if their claims are fraudulent. The company has been trying to explain itself and its business model -- and fend off serious accusations of bias, discrimination, and general creepiness -- ever since. [...] Over a series of seven tweets, Lemonade claimed that it gathers more than 1,600 "data points" about its users -- "100X more data than traditional insurance carriers," the company claimed. The thread didn't say what those data points are or how and when they're collected, simply that they produce "nuanced profiles" and "remarkably predictive insights" which help Lemonade determine, in apparently granular detail, its customers' "level of risk." Lemonade then provided an example of how its AI "carefully analyzes" videos that it asks customers making claims to send in "for signs of fraud," including "non-verbal cues." Traditional insurers are unable to use video this way, Lemonade said, crediting its AI for helping it improve its loss ratios: that is, taking in more in premiums than it had to pay out in claims. Lemonade used to pay out a lot more than it took in, which the company said was "friggin terrible." Now, the thread said, it takes in more than it pays out.
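The "loss ratio" Lemonade bragged about is simple arithmetic: claims paid out divided by premiums taken in, where anything below 1.0 means the insurer keeps more than it pays. A minimal sketch with made-up figures (not Lemonade's actual numbers):

```python
def loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    """Fraction of premium income paid back out in claims."""
    return claims_paid / premiums_earned

# Hypothetical: an insurer collecting $10M in premiums and paying $7M in
# claims has a 70% loss ratio. Above 100% ("friggin terrible", in the
# thread's words) it pays out more than it takes in.
print(loss_ratio(7_000_000, 10_000_000))  # 0.7
```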

The Twitter thread made the rounds to a horrified and growing audience, drawing the requisite comparisons to the dystopian tech television series Black Mirror and prompting people to ask if their claims would be denied because of the color of their skin, or if Lemonade's claims bot, "AI Jim," decided that they looked like they were lying. What, many wondered, did Lemonade mean by "non-verbal cues?" Threats to cancel policies (and screenshot evidence from people who did cancel) mounted. By Wednesday, the company walked back its claims, deleting the thread and replacing it with a new Twitter thread and blog post. You know you've really messed up when your company's apology Twitter thread includes the word "phrenology." "The Twitter thread was poorly worded, and as you note, it alarmed people on Twitter and sparked a debate spreading falsehoods," a spokesperson for Lemonade told Recode. "Our users aren't treated differently based on their appearance, disability, or any other personal characteristic, and AI has not been and will not be used to auto-reject claims."

The company also maintains that it doesn't profit from denying claims and that it takes a flat fee from customer premiums and uses the rest to pay claims. Anything left over goes to charity (the company says it donated $1.13 million in 2020). But this model assumes that the customer is paying more in premiums than what they're asking for in claims. So, what's really going on here? According to Lemonade, the claim videos customers have to send are merely to let them explain their claims in their own words, and the "non-verbal cues" are facial recognition technology used to make sure one person isn't making claims under multiple identities. Any potential fraud, the company says, is flagged for a human to review and make the decision to accept or deny the claim. AI Jim doesn't deny claims. The blog post also didn't address -- nor did the company answer Recode's questions about -- how Lemonade's AI and its many data points are used in other parts of the insurance process, like determining premiums or if someone is too risky to insure at all.

Privacy

Clearview AI Hit With Sweeping Legal Complaints Over Controversial Face Scraping in Europe (theverge.com) 10

Privacy International (PI) and several other European privacy and digital rights organizations announced today that they've filed legal complaints against the controversial facial recognition company Clearview AI. From a report: The complaints filed in France, Austria, Greece, Italy, and the United Kingdom say that the company's method of documenting and collecting data -- including images of faces it automatically extracts from public websites -- violates European privacy laws. New York-based Clearview claims to have built "the largest known database of 3+ billion facial images."

PI, NYOB, Hermes Center for Transparency and Digital Human Rights, and Homo Digitalis all claim that Clearview's data collection goes beyond what the average user would expect when using services like Instagram, LinkedIn, or YouTube. "Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users," said PI legal officer Ioannis Kouvakas in a joint statement.

AI

Automation Puts a Premium on Decision-Making Jobs (axios.com) 59

A new paper shows that as automation has reduced the number of rote jobs, it has led to an increase in the proportion and value of occupations that involve decision-making. From a report: Automation and AI will shape the labor market, putting a premium -- at least for now -- on workers who can make decisions on the fly, while eroding the value of routine jobs. David Deming, a political economist at the Harvard Kennedy School, analyzed labor data over the past half-century and found that the share of all U.S. jobs requiring decision-making rose from 6% in 1960 to 34% in 2018, with nearly half the increase occurring since 2007.

Partially as a result, a greater share of wages is going to management and management-related occupations, more than doubling since 1960 to 32% -- a trend that is more pronounced in high-growth industries. This shift has also reinforced generational disparity in the labor market. Getting better at making decisions requires experience, and experience requires time on the job. Largely as a result, career earnings growth in the U.S. more than doubled between 1960 and 2017, and the age of peak earnings increased from the late 30s to the mid-50s.

AI

OpenAI's $100 Million Startup Fund Will Make 'Big Early Bets' With Microsoft As Partner 10

OpenAI is launching a $100 million startup fund, which it calls the OpenAI Startup Fund, through which it and its partners will invest in early-stage AI companies tackling major problems (and productivity). Among those partners and investors in the fund is Microsoft, at whose Build conference OpenAI founder Sam Altman announced the news. TechCrunch reports: In a prerecorded video, Altman explained that "this is not a typical corporate venture fund. We plan to make big early bets on a relatively small number of companies, probably not more than 10." It's not clear exactly how the $100 million will be divided or disbursed, or on what timeline, or whether this is part of a longer program. But it seems to be a limited fund, not just the 2021 round.

Altman did say that they will be looking for companies that are taking on serious issues, like healthcare, climate change and education, where AI-powered applications or approaches could "benefit all of humanity," in keeping with OpenAI's mission statement. But it would also consider productivity improvements as well, presumably like the GPT-3-powered natural language coding Microsoft showed off yesterday. Companies selected for funding will receive early access to new OpenAI systems and Azure resources from Microsoft, which hopefully would allow them to spring fully formed and ready to scale from the program. OpenAI would not elaborate on the equity agreement, expectations for startups, other partners or any further details. It's entirely possible that the $100 million figure is the only thing they've actually settled on.

AI

Synopsys Claims Chip Design Breakthrough With AI Engineering (forbes.com) 31

MojoKid writes: Mountain View, CA silicon design tools heavyweight Synopsys is claiming a breakthrough in chip design automation that it says will usher in a new level of semiconductor innovation, taking the industry above and beyond the limits of Moore's Law (Gordon Moore's observation that the number of transistors in chips doubles roughly every two years), which is now considered by many to be plateauing. Synopsys' tool, called DSO.ai, is the world's first autonomous AI tool set for chip design. Synopsys claims DSO.ai can dramatically accelerate, enhance, and reduce the costs involved with something called place-and-route. Just as it sounds, place-and-route (sometimes called floor planning) refers to the placement of logic and IP blocks, and the routing of the traces and various interconnects in a chip designed to join them all together. Synopsys' DSO.ai optimizes and streamlines this process using the iterative nature of artificial intelligence and machine learning, such that what used to take dozens of engineers weeks or potentially months now will take a junior engineer just days to complete. DSO.ai iterates on the floorplan and layout of a chip, learning from each iteration, fine-tuning and optimizing the chip within its design parameters and targets along the way. The old semiconductor paradigms are rapidly becoming a thing of the past. Today, it's about the best transistors, architectures, and accelerators for the job, and the human-constrained physical design engineering effort no longer has to be a gating factor.

Microsoft

Microsoft Uses GPT-3 To Let You Code in Natural Language (techcrunch.com) 37

Microsoft is now using OpenAI's massive GPT-3 natural language model in its no-code/low-code Power Apps service to translate spoken text into code in its recently announced Power Fx language. From a report: Now don't get carried away. You're not going to develop the next TikTok while only using natural language. Instead, what Microsoft is doing here is taking some of the low-code aspects of a tool like Power Apps and using AI to essentially turn those into no-code experiences, too. For now, the focus here is on Power Apps formulas, which despite the low-code nature of the service, is something you'll have to write sooner or later if you want to build an app of any sophistication.

"Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code," said Charles Lamanna, corporate vice president for Microsoft's low-code application platform. In practice, this looks like the citizen programmer writing "find products where the name starts with 'kids'" -- and Power Apps then rendering that as "Filter('BC Orders', Left('Product Name',4)="Kids")". Because Microsoft is an investor in OpenAI, it's no surprise the company chose its model to power this experience.
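
Features like this are commonly framed as few-shot prompting of a large language model: worked natural-language-to-formula pairs are prepended to the user's request, and the model completes the pattern. The sketch below is illustrative only -- the prompt format and example formulas are assumptions, not Microsoft's implementation, and the model call itself is left out:

```python
# Hypothetical sketch of framing natural-language-to-formula translation
# as a few-shot prompt. The examples and format are illustrative; a real
# system would send this prompt to a large language model such as GPT-3.

FEW_SHOT = [
    ("show accounts where the city is Seattle",
     'Filter(Accounts, City = "Seattle")'),
    ("find products where the name starts with 'kids'",
     "Filter('BC Orders', Left('Product Name', 4) = \"Kids\")"),
]

def build_prompt(request: str) -> str:
    """Concatenate worked examples, then the new request, for the model."""
    blocks = [f"Q: {nl}\nA: {formula}" for nl, formula in FEW_SHOT]
    blocks.append(f"Q: {request}\nA:")
    return "\n\n".join(blocks)

print(build_prompt("list orders placed this week"))
```

The model's completion after the final "A:" would then be offered to the user as a candidate formula to accept or edit, matching Nadella's description of generating "a list of the most relevant formulas for you to choose from."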

China

Huawei Founder Urges Shift To Software To Counter US Sanctions (reuters.com) 22

Founder of Chinese tech giant Huawei Technologies Ren Zhengfei has called on the company's staff to "dare to lead the world" in software as the company seeks growth beyond the hardware operations that U.S. sanctions have crippled. From a report: The internal memo seen by Reuters is the clearest evidence yet of the company's direction as it responds to the immense pressure sanctions have placed on the handset business that was at its core. Ren said in the memo the company was focusing on software because future development in the field is fundamentally "outside of U.S. control and we will have greater independence and autonomy." As it will be hard for Huawei to produce advanced hardware in the short term, it should focus on building software ecosystems, such as its HarmonyOS operating system, its cloud AI system Mindspore, and other IT products, the note said.

Businesses

Do You Own a Motorcycle Airbag if You Have to Pay Extra to Inflate It? (hackaday.com) 166

"Pardon me while I feed the meter on my critical safety device," quips a Hackaday article (shared by long-time Slashdot reader AmiMoJo): If you ride a motorcycle, you may have noticed that the cost of airbag vests has dropped. In one case, something very different is going on here. As reported by Motherboard, you can pick up a KLIM Ai-1 for $400 but the airbag built into it will not function until unlocked with an additional purchase, and a big one at that. So do you really own the vest for $400...?

The Klim airbag vest has two components that make it work. The vest itself is from Klim and costs $400 and arrives along with the airbag unit. But if you want it to actually detect an accident and inflate, you need to load up a smartphone app and activate a small black box made by a different company: In&Motion. That requires your choice of another $400 payment, or you can subscribe at $12 a month or $120 a year.

If you fail to renew, the vest is essentially worthless.

Hackaday notes it raises the question of what it means to own a piece of technology.

"Do you own your cable modem or cell phone if you aren't allowed to open it up? Do you own a piece of software that wants to call home periodically and won't let you stop it?"

AI

RAI's Certification Process Aims To Prevent AIs From Turning Into HALs (engadget.com) 71

An anonymous reader quotes a report from Engadget: [T]he Responsible Artificial Intelligence Institute (RAI) -- a non-profit developing governance tools to help usher in a new generation of trustworthy, safe, Responsible AIs -- hopes to offer a more standardized means of certifying that our next HAL won't murder the entire crew. In short they want to build "the world's first independent, accredited certification program of its kind." Think of the LEED green building certification system used in construction but with AI instead. Work towards this certification program began nearly half a decade ago alongside the founding of RAI itself, at the hands of Dr. Manoj Saxena, University of Texas Professor on Ethical AI Design, RAI Chairman and a man widely considered to be the "father" of IBM Watson, though his initial inspiration came even further back.

Certifications are awarded in four levels -- basic, silver, gold, and platinum (sorry, no bronze) -- based on the AI's scores along the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered via questionnaire and a scan of the AI system. Developers must score 60 points to reach the base certification, 70 points for silver and so on, up to 90 points-plus for platinum status. [Mark Rolston, founder and CCO of argodesign] notes that design analysis will play an outsized role in the certification process. "Any company that is trying to figure out whether their AI is going to be trustworthy needs to first understand how they're constructing that AI within their overall business," he said. "And that requires a level of design analysis, both on the technical front and in terms of how they're interfacing with their users, which is the domain of design."
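
The tier logic described above is a straightforward threshold lookup. One caveat in the sketch below: the article states 60, 70, and 90-plus point cutoffs, so the 80-point gold threshold is my interpolation from "and so on", not a figure RAI has published:

```python
# Sketch of the tiered certification scoring described above.
# The 80-point gold threshold is an assumption (interpolated); the
# article only gives 60 (basic), 70 (silver), and 90+ (platinum).

TIERS = [(90, "platinum"), (80, "gold"), (70, "silver"), (60, "basic")]

def certification_level(score):
    """Map a questionnaire/scan score to a certification tier, if any."""
    for threshold, level in TIERS:
        if score >= threshold:
            return level
    return None  # below 60 points: no certification awarded

print(certification_level(72))  # silver
```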

RAI expects to find (and in some cases has already found) a number of willing entities from government, academia, enterprise corporations, or technology vendors for its services, though the two are remaining mum on specifics while the program is still in beta (until November 15th, at least). Saxena hopes that, like the LEED certification, RAI will eventually evolve into a universalized certification system for AI. It will help accelerate the development of future systems, he argues, by eliminating much of the uncertainty and liability exposure today's developers -- and their harried compliance officers -- face while building public trust in the brand. "We're using standards from IEEE, we are looking at things that ISO is coming out with, we are looking at leading indicators from the European Union like GDPR, and now this recently announced algorithmic law," Saxena said. "We see ourselves as the 'do tank' that can operationalize those concepts and those think tanks' work."

Google

Google Unit DeepMind Tried and Failed to Win AI Autonomy From Parent (wsj.com) 32

Senior managers at Google artificial-intelligence unit DeepMind have been negotiating for years with the parent company for more autonomy, seeking an independent legal structure for the sensitive research they do. From a report: DeepMind told staff late last month that Google called off those talks, WSJ reported Friday, citing people familiar with the matter. The end of the long-running negotiations, which hasn't previously been reported, is the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence. Earlier this month, Google unveiled plans to double the size of its team studying the ethics of artificial intelligence and to consolidate that research.

[...] DeepMind's founders had sought, among other ideas, a legal structure used by nonprofit groups, reasoning that the powerful artificial intelligence they were researching shouldn't be controlled by a single corporate entity, according to people familiar with those plans. On a video call last month with DeepMind staff, co-founder Demis Hassabis said the unit's effort to negotiate a more autonomous corporate structure was over, according to people familiar with the matter. He also said DeepMind's AI research and its application would be reviewed by an ethics board staffed mostly by senior Google executives.

Supercomputing

Google Plans To Build a Commercial Quantum Computer By 2029 (engadget.com) 56

Google developers are confident they can build a commercial-grade quantum computer by 2029. Engadget reports: Google CEO Sundar Pichai announced the plan during today's I/O stream, and in a blog post, quantum AI lead engineer Erik Lucero further outlined the company's goal to "build a useful, error-corrected quantum computer" within the decade. Executives also revealed Google's new campus in Santa Barbara, California, which is dedicated to quantum AI. The campus has Google's first quantum data center, hardware research laboratories, and the company's very own quantum processor chip fabrication facilities.

"As we look 10 years into the future, many of the greatest global challenges, from climate change to handling the next pandemic, demand a new kind of computing," Lucero said. "To build better batteries (to lighten the load on the power grid), or to create fertilizer to feed the world without creating 2 percent of global carbon emissions (as nitrogen fixation does today), or to create more targeted medicines (to stop the next pandemic before it starts), we need to understand and design molecules better. That means simulating nature accurately. But you can't simulate molecules very well using classical computers."

Microsoft

Microsoft Teams Launches For Friends and Family With Free All-Day Video Calling (theverge.com) 59

Microsoft is launching the personal version of Microsoft Teams today. After previewing the service nearly a year ago, Microsoft Teams is now available for free personal use amongst friends and families. From a report: The service itself is almost identical to the Microsoft Teams that businesses use, and it will allow people to chat, video call, and share calendars, locations, and files easily. Microsoft is also continuing to offer everyone free 24-hour video calls that it introduced in the preview version in November. You'll be able to meet up with up to 300 people in video calls that can last for 24 hours. Microsoft will eventually enforce limits of 60 minutes for group calls of up to 100 people after the pandemic, but keep 24 hours for 1:1 calls. While the preview initially launched on iOS and Android, Microsoft Teams for personal use now works across the web, mobile, and desktop apps. Microsoft is also allowing Teams personal users to enable its Together mode -- a feature that uses AI to segment your face and shoulders and place you together with other people in a virtual space. Skype got this same feature back in December.

AI

AI Tool Writes Real Estate Descriptions Without Ever Stepping Inside a Home (cnn.com) 32

A Canadian startup called Listing AI is using AI to quickly churn out computer-generated descriptions of real estate. All users need to do is give it some details about the home, and the AI does the rest. CNN reports: "L O V E L Y Oakland!" the house description began. It went on to give a slew of details about the 1,484 square-foot home -- light-filled, charming, Mediterranean-style, with a yard that "boasts lush front landscaping" -- and finished by describing the "cozy fireplace" and "rustic-chic" pressed tin ceiling in the living room. The results still need work: The real-life Oakland, California, home that fits with the above description (which my family is currently selling) actually has a pressed tin ceiling in the dining room, rather than the living room, for instance. The descriptions Listing AI created for me are not nearly as specific or well-written as the one crafted by our (human) realtor. And I had to provide the website with a lot of information about different rooms and features of the house and the outdoor landscaping -- a process that felt a bit like real-estate Mad Libs -- before the website was able to come up with several different descriptions.

But the general coherence of the descriptions that Listing AI proposed within seconds of my submission provides yet another sign that AI is getting better at a task that was traditionally seen as uniquely human -- and shows how people may be able to work with the technology, rather than fearing it may replace us. It probably won't do all the work of writing a house description for you, but according to Listing AI co-founder Mustafa Al-Hayali, that's not the point. He hopes it will complete about 80% to 90% of the work for coming up with a home description, which may be completed by a realtor or a copywriter. "I don't believe it's meant to replace a person when it comes to completing a task, but it's supposed to make their job a whole lot easier," Al-Hayali told CNN Business. "It can generate ideas you can use."

The information used in the app is processed by GPT-3, an AI model from nonprofit research company OpenAI. According to MIT Technology Review, GPT-3 could herald a new type of search engine.

Google

Language Models Like GPT-3 Could Herald a New Type of Search Engine (technologyreview.com) 13

An anonymous reader quotes a report from MIT Technology Review: In 1998 a couple of Stanford graduate students published a paper describing a new kind of search engine: "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems." The key innovation was an algorithm called PageRank, which ranked search results by calculating how relevant they were to a user's query on the basis of their links to other pages on the web. On the back of PageRank, Google became the gateway to the internet, and Sergey Brin and Larry Page built one of the biggest companies in the world. Now a team of Google researchers has published a proposal for a radical redesign that throws out the ranking approach and replaces it with a single large AI language model, such as BERT or GPT-3 -- or a future version of them. The idea is that instead of searching for information in a vast list of web pages, users would ask questions and have a language model trained on those pages answer them directly. The approach could change not only how search engines work, but what they do -- and how we interact with them.

[Donald Metzler and his colleagues at Google Research] are interested in a search engine that behaves like a human expert. It should produce answers in natural language, synthesized from more than one document, and back up its answers with references to supporting evidence, as Wikipedia articles aim to do. Large language models get us part of the way there. Trained on most of the web and hundreds of books, GPT-3 draws information from multiple sources to answer questions in natural language. The problem is that it does not keep track of those sources and cannot provide evidence for its answers. There's no way to tell if GPT-3 is parroting trustworthy information or disinformation -- or simply spewing nonsense of its own making.

Metzler and his colleagues call language models dilettantes -- "They are perceived to know a lot but their knowledge is skin deep." The solution, they claim, is to build and train future BERTs and GPT-3s to retain records of where their words come from. No such models are yet able to do this, but it is possible in principle, and there is early work in that direction. There have been decades of progress on different areas of search, from answering queries to summarizing documents to structuring information, says Ziqi Zhang at the University of Sheffield, UK, who studies information retrieval on the web. But none of these technologies overhauled search because they each address specific problems and are not generalizable. The exciting premise of this paper is that large language models are able to do all these things at the same time, he says.
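
The "expert with references" idea separates two steps a bare language model fuses: first retrieve candidate documents, then answer while keeping track of which sources were used. This toy sketch is mine, not the paper's -- it ranks documents by naive word overlap, standing in for real retrieval, just to show where provenance enters the pipeline:

```python
# Toy sketch of retrieval with provenance: rank documents by naive word
# overlap with the query, so any generated answer can cite its sources --
# unlike a bare language model, which cannot provide evidence.

DOCS = {
    "doc1": "PageRank ranks pages by the structure of links between them.",
    "doc2": "GPT-3 is a large language model with 175 billion parameters.",
}

def retrieve(query: str):
    """Return document ids ranked by word overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), doc_id)
              for doc_id, text in DOCS.items()]
    scored.sort(reverse=True)
    return [doc_id for score, doc_id in scored if score > 0]

sources = retrieve("how does PageRank rank pages")
print(sources)  # ['doc1'] -- an answer could now cite doc1 as evidence
```

The hard open problem the paper points at is doing this inside the model itself: training future BERTs and GPT-3s to retain records of where their words come from, rather than bolting retrieval on from outside.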

AI

GTA 5 Graphics Are Now Being Boosted By Advanced AI At Intel (gizmodo.com) 44

Researchers at Intel Labs have applied machine learning techniques to GTA 5 to make it look incredibly realistic. Gizmodo reports: [I]nstead of training a neural network on famous masterpieces, the researchers at Intel Labs relied on the Cityscapes Dataset, a collection of images of a German city's urban center captured by a car's built-in camera, for training. When a different artistic style is applied to footage using machine learning techniques, the results are often temporally unstable, which means that frame by frame there are weird artifacts jumping around, appearing and disappearing, that diminish how real the results look. With this new approach, the rendered effects exhibit none of those telltale artifacts, because in addition to processing the footage rendered by Grand Theft Auto V's game engine, the neural network also uses other rendered data the game's engine has access to, like the depth of objects in a scene, and information about how the lighting is being processed and rendered.
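The key idea in that paragraph is that the network sees more than pixels: engine buffers such as depth and lighting are attached to each pixel as extra input channels. A hypothetical sketch of that input-stacking step, with illustrative names and shapes rather than Intel's actual code:

```python
# Illustrative only: stack a rendered RGB frame with auxiliary engine
# buffers (depth, lighting) into one multi-channel input grid, the way an
# enhancement network can condition on engine data rather than pixels alone.

def stack_gbuffers(rgb, depth, lighting):
    """Each argument is a height x width grid: rgb[y][x] is an (r, g, b)
    tuple; depth[y][x] and lighting[y][x] are scalars. Returns a grid of
    5-channel per-pixel feature vectors."""
    height, width = len(rgb), len(rgb[0])
    features = []
    for y in range(height):
        row = []
        for x in range(width):
            r, g, b = rgb[y][x]
            # The extra channels give the network geometry and shading
            # cues that are ambiguous from color alone.
            row.append((r, g, b, depth[y][x], lighting[y][x]))
        features.append(row)
    return features

# One-pixel "frame" just to show the channel layout.
frame = stack_gbuffers(
    rgb=[[(0.2, 0.3, 0.4)]],
    depth=[[12.5]],
    lighting=[[0.9]],
)
print(frame[0][0])  # (0.2, 0.3, 0.4, 12.5, 0.9)
```

In a real pipeline these stacked channels would feed a convolutional network; because the auxiliary buffers are consistent from frame to frame, they also help explain the temporal stability the article describes.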

That's a gross simplification -- you can read a more in-depth explanation of the research here -- but the results are remarkably photorealistic. The surface of the road is smoothed out, highlights on vehicles look more pronounced, and the surrounding hills in several clips look more lush and alive with vegetation. What's even more impressive is that the researchers think, with the right hardware and further optimization, the gameplay footage could be enhanced by their convolutional network at "interactive rates" -- another way to say in real-time -- when baked into a video game's rendering engine.

AI

Voice Actor Reportedly Responsible For Amazon Alexa Revealed (theverge.com) 23

An anonymous reader quotes a report from The Verge: Amazon's Alexa has a voice familiar to millions: calm, warm, and measured. But like most synthetic speech, its tones have a human origin. There was someone whose voice had to be recorded, analyzed, and algorithmically reproduced to create Alexa as we know it now. Amazon has never revealed who this "original Alexa" is, but journalist Brad Stone says he tracked her down, and she is Nina Rolle, a voiceover artist based in Boulder, Colorado. The claim comes from Stone's upcoming book on the tech giant, Amazon Unbound, an excerpt of which is published here in Wired. Neither Amazon nor Rolle confirmed or denied Stone's reporting, which he says is based on conversations with the professional voiceover community, but Rolle's voice alone makes for a compelling case.

Here's how Stone writes up the process in selecting Alexa's voice: "Believing that the selection of the right voice for Alexa was critical, [then-Amazon exec Greg] Hart and colleagues spent months reviewing the recordings of various candidates that GM Voices produced for the project, and presented the top picks to Bezos. The Amazon team ranked the best ones, asked for additional samples, and finally made a choice. Bezos signed off on it. Characteristically secretive, Amazon has never revealed the name of the voice artist behind Alexa. I learned her identity after canvassing the professional voice-over community: Boulder, Colorado-based voice actress and singer Nina Rolle. Her professional website contains links to old radio ads for products such as Mott's Apple Juice and the Volkswagen Passat -- and the warm timbre of Alexa's voice is unmistakable. Rolle said she wasn't allowed to talk to me when I reached her on the phone in February 2021. When I asked Amazon to speak with her, they declined."

Google

Google Plans To Double AI Ethics Research Staff (wsj.com) 49

Alphabet's Google plans to double the size of its team studying artificial-intelligence ethics in the coming years, as the company looks to strengthen a group that has had its credibility challenged by research controversies and personnel defections. From a report: Vice President of Engineering Marian Croak said at The Wall Street Journal's Future of Everything Festival that the hires will increase the size of the responsible AI team that she leads to 200 researchers. Additionally, she said that Alphabet Chief Executive Sundar Pichai has committed to boost the operating budget of a team tasked with evaluating code and product to avert harm, discrimination and other problems with AI. "Being responsible in the way that you develop and deploy AI technology is fundamental to the good of the business," Ms. Croak said. "It severely damages the brand if things aren't done in an ethical way." Google announced in February that Ms. Croak would lead the AI ethics group after it fired the division's co-head, Margaret Mitchell, for allegedly sharing internal documents with people outside the company. Ms. Mitchell's exit followed criticism of Google's suppression of research last year by a prominent member of the team, Timnit Gebru, who says she was fired because of studies critical of the company's approach to AI. Mr. Pichai pledged an investigation into the circumstances around Ms. Gebru's departure and said he would seek to restore trust.
