
Eric Drexler of the Oxford Future of Humanity Institute proposes that artificial intelligence is mainly emerging as cloud-based AI services. His 210-page paper analyzes how AI is developing today.

AI development is automating many tasks, and automating AI research and development itself will accelerate AI improvement.

Accelerated AI improvement would mean the emergence of asymptotically comprehensive, superintelligent-level AI services that—crucially—can include the service of developing new services, both narrow and broad, guided by concrete human goals and informed by strong models of human (dis)approval. The concept of comprehensive AI services (CAIS) provides a model of flexible, general intelligence in which agents are a class of service-providing products, rather than a natural or necessary engine of progress in themselves.

The concept and potential impacts of comprehensive AI services are analyzed in detail.

Safe AGI

Responsible development of AI technologies can provide an increasingly comprehensive range of superintelligent-level (SI-level) AI services—including the service of developing new services—and can thereby deliver the value of general-purpose AI while avoiding the risks associated with self-modifying AI agents.

Tasks for advanced AI include (a sketch of how these tasks might compose follows the list):
• Modeling human concerns
• Interpreting human requests
• Suggesting implementations
• Requesting clarifications
• Developing and testing systems
• Monitoring deployed systems
• Assessing feedback from users
• Upgrading and testing systems
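
A minimal sketch of how these tasks might compose into a service-development loop; every function and type name here is a hypothetical illustration for this post, not an interface from Drexler's paper:

```python
# Illustrative only: hypothetical interfaces showing how the tasks above
# might compose into an AI-assisted development loop.
from dataclasses import dataclass

@dataclass
class Request:
    text: str

@dataclass
class System:
    spec: str
    version: int = 1

def interpret(request: Request) -> str:
    """Interpret a human request into a working specification."""
    return request.text.strip().lower()

def needs_clarification(spec: str) -> bool:
    """Toy heuristic: request clarification when the spec is too short to act on."""
    return len(spec.split()) < 3

def develop_and_test(spec: str) -> System:
    """Develop a candidate system (suggest an implementation) and test it."""
    return System(spec=spec)

def assess_feedback(feedback: list[str]) -> bool:
    """Assess monitoring feedback from users; True means an upgrade is warranted."""
    return any("problem" in note for note in feedback)

def service_loop(request: Request, feedback: list[str]) -> System:
    spec = interpret(request)                  # interpret the human request
    if needs_clarification(spec):
        raise ValueError("please clarify the request")
    system = develop_and_test(spec)            # develop and test the system
    if assess_feedback(feedback):              # monitor deployment, assess feedback
        system = System(spec=system.spec, version=system.version + 1)  # upgrade and retest
    return system

print(service_loop(Request("translate technical documents"), ["problem: slow"]))
```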

Analysis of the Current Trend Toward Superintelligence

A chapter of analysis argues that superintelligence will emerge from current trends. Machines running at 1 PFLOP per second can equal or exceed the human brain in raw computational capacity for specific tasks, and 1 PFLOP/s machines already exist, with far greater capacity to come.

Human beings require months to years to learn to recognize objects, to recognize and transcribe speech, and to learn vocabulary and translate languages. Given abundant data and 1 PFLOP/s of processing power, the deep learning systems referenced above could be trained in hours (image and speech recognition, ~10 exaFLOPs) to weeks (translation, ~1000 exaFLOPs). These training times are short by human standards, which suggests that future learning algorithms running on 1 PFLOP/s systems could rapidly learn task domains of substantial scope. A recent systematic study shows that the scale of efficient parallelism in DNN training increases as tasks grow more complex, suggesting that training times could remain moderate even as product capabilities increase.
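
A quick back-of-the-envelope check of those numbers, using only the figures quoted above (training time is just total training FLOPs divided by sustained throughput):

```python
# Back-of-the-envelope: training time = total training FLOPs / sustained FLOP/s.
PFLOPS = 1e15  # a 1 PFLOP/s machine

for task, total_flops in [
    ("image/speech recognition", 10e18),    # ~10 exaFLOPs
    ("translation", 1000e18),               # ~1000 exaFLOPs
]:
    seconds = total_flops / PFLOPS
    print(f"{task}: {seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")

# Output:
# image/speech recognition: 2.8 hours (0.1 days)
# translation: 277.8 hours (11.6 days)
```

This reproduces the hours-to-weeks range cited above.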

Substantially superhuman computational capacity will accompany the eventual emergence of software with broad functional competencies. Any relevant future scenario must therefore include the emergence of increasing superintelligence.

Super General Intelligence Can Be Created From Many Narrower AI Services

The paper proposes a strategy of achieving general AI capabilities by tiling task-space with AI services.

It is natural to think of services as populating task spaces in which similar services are neighbors and dissimilar services are distant, while broader services cover broader regions. This picture of services and task-spaces can be useful both as a conceptual model for thinking about broad AI competencies, and as a potential mechanism for implementing them.
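
One way to make the task-space picture concrete is the sketch below; the embedding coordinates and service names are invented for illustration, not taken from the paper. Each service covers a region around a point in a task-embedding space, broader services cover larger regions, and a task is routed to the nearest covering service:

```python
# Illustrative: services tile a 2-D task-embedding space; a task is routed
# to the nearest service whose region of competence covers it.
import math

services = {
    # name: (center of competence, radius of coverage)
    "speech_to_text": ((0.9, 0.1), 0.3),
    "translation":    ((0.7, 0.6), 0.4),
    "image_labeling": ((0.1, 0.9), 0.3),
    "general_nlp":    ((0.8, 0.4), 0.8),  # broader service, larger region
}

def route(task):
    """Return covering services, nearest first; an empty list is a gap in the tiling."""
    hits = [(name, math.dist(center, task))
            for name, (center, radius) in services.items()
            if math.dist(center, task) <= radius]
    return sorted(hits, key=lambda hit: hit[1])

print(route((0.75, 0.5)))  # covered by both translation and general_nlp
print(route((0.0, 0.0)))   # a gap: the "service of developing new services"
                           # would train a new service to tile this region
```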

New AI Systems Will Be Part of an Ecosystem of Peer AI Systems

AI systems will be instantiated together with diverse peer-level systems. We should expect that any particular AI system will be embedded in an extended AI R&D ecosystem having aggregate capabilities that exceed its own. Any particular AI architecture will be a piece of software that can be trained and run an indefinite number of times, providing multiple instantiations that serve a wide range of purposes.

Avoiding Super-AGI Domination

It is often taken for granted that unaligned superintelligent-level agents could amass great power and dominate the world by physical means, not necessarily to human advantage. Several considerations suggest that, with suitable preparation, this outcome could be avoided:
• Powerful SI-level capabilities can precede AGI agents.
• SI-level capabilities could be applied to strengthen defensive stability.
• Unopposed preparation enables strong defensive capabilities.
• Strong defensive capabilities can constrain problematic agents.

Applying SI-level capabilities to ensure strategic stability could enable us to coexist with SI-level agents that do not share our values. The present analysis outlines general prospects for an AI-stable world, but necessarily raises more questions than it can explore.

A well-prepared world, able to deploy extensive, superintelligent-level security resources, need not be vulnerable to subsequent takeover by superintelligent agents.

Superpowers Must Not Be Confused With Supercapabilities

It is important to distinguish between strategically relevant capabilities far beyond those of contemporaneous, potentially superintelligent competitors (“superpowers”), and capabilities that are (merely) enormous by present standards (“supercapabilities”). Supercapabilities are robust consequences of superintelligence, while superpowers—as defined—are consequences of supercapabilities in conjunction with a situation that may or may not arise: strategic dominance enabled by strongly asymmetric capabilities. In discussing AI strategy, we must take care not to confuse prospective technological capabilities with outcomes that are path-dependent and potentially subject to choice.

Nextbigfuture Application to Today's World of Google, Amazon and Facebook Dominance

It seems that a world abundant in AI tools could be more resistant to a Skynet-style AI. The analogy is that citizens armed with guns, including automatic weapons, would be able to protect themselves from any domestic or foreign military tyrant. This suggests a policy of open-sourcing any AI capabilities that are more than some number of generations or years behind the commercial state of the art.
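
As a toy illustration of that release rule (the thresholds below are placeholders, not values proposed anywhere in the paper):

```python
# Toy rule: open-source a capability once it lags the commercial state of
# the art by more than a set number of years or product generations.
def should_open_source(age_years: float, generations_behind: int,
                       max_age: float = 5.0, max_generations: int = 2) -> bool:
    return age_years > max_age or generations_behind > max_generations

print(should_open_source(age_years=6.0, generations_behind=1))  # True: too old
print(should_open_source(age_years=2.0, generations_behind=3))  # True: 3 generations behind
print(should_open_source(age_years=1.0, generations_behind=1))  # False: near state of the art
```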

There also needs to be more accumulation of public-domain data and public access to sensor data. AI tools need to become more open, and the data for training needs to be made more public.

This would apply to the current social media and search world. The dominance of Google in search, and of Facebook and Amazon in their domains, needs to be tempered with some level of freedom for a reasonable public or DIY alternative.

Patents and copyright give inventors and innovators a limited time to profit before everyone gets their share. AI systems and data (like the social graph) need time limits or other limits on monopolization before the capabilities are made public.

SOURCES – Eric Drexler and the Oxford Future of Humanity Institute

Written By Brian Wang. Nextbigfuture.com


