Cointelegraph talks with AI visionary Ben Goertzel, who shares with us his vision of the future of AI and computing, while also offering insights on how to guide an “artificial general intelligence” toward good, rather than evil.
An author and researcher in the field of artificial intelligence, Goertzel is chairman of the Artificial General Intelligence Society and the OpenCog Foundation, vice chairman of the futurist nonprofit Humanity+, founder and CEO of SingularityNet, and chief scientist at Hanson Robotics. He has been working for years, along with a team of dozens of researchers scattered around the globe, to create the world's first AI marketplace powered by Blockchain technologies.
From AI to AGI
BG: I began my career as a mathematics Ph.D. in the 1980s. I was an academic for a while, then I entered industry in the late '90s, and I've been doing artificial intelligence applications in sort of every industry you can imagine: genetics, bioinformatics, natural language processing, some national security work with the US government, computer graphics, vision processing.
Six years ago I moved to Hong Kong and began working with my friend David Hanson on applying AI to humanoid robotics. He builds what are the world's most realistic humanoid robots, with beautiful facial expressions and emotional expressiveness. He wanted the robots to be intelligent as well as good-looking, and of course that's a big research goal. We're still working on it, but it's a fascinating challenge.
That seemed like one route to realizing my main research goal in AI, which was really transitioning from narrow AIs, or AIs that do highly specific tasks, to what I thought of as AGI, or artificial general intelligence.
I coined that term in 2002 or 2003, and I've organized a conference on AGI, artificial general intelligence, each year since. In the last decade we've seen the concept grow and flourish quite a lot, just as we've seen AI flourish in every different area.
Sophia robot
One of the things we realized in developing the Sophia robot and developing our AI technology was that to take the next big leap in AI functionality, we would want to build what we've been thinking of as a massive, globally distributed AI mindcloud.
We want a decentralized network of AIs, each AI carrying out its own particular function, with the different AIs in the decentralized network all communicating with each other, sharing data, giving each other tasks, and doing work for each other.
Ben Goertzel speaks on AI and Blockchain in Barcelona
DAO of AIs
When we looked at how to build this decentralized network of AIs that share information and do things for each other, the Blockchain emerged as an appropriate platform.
So really we started out with wanting to build in essence a DAO of AIs, although we didn’t call it that initially because the phraseology of a DAO is just a few years old.
I started my first AI company in 1998; it lived for only three years, from 1998 to 2001. It was called Webmind, and it was based in Silicon Alley in New York City during the first dot-com boom. What we wanted to build there was, in essence, what you would now call a DAO of AIs.
We wanted to build a network that would let people put up AIs anywhere on the planet. All the different AIs in the network would talk to each other and share information, and the collective intelligence of the whole network of AIs would far exceed the intelligence of any one AI in the network. You need a lot of supporting technology to make that work.
Having a distributed ledger is very valuable because then the different AIs can keep track of what transactions have happened all around the network, without the need for a central controller.
Homomorphic encryption and related technologies are very valuable because some AIs have data they want to share with other AIs only in certain aspects and certain ways. The distributed ledger and the homomorphic encryption are sort of critical technologies for realizing this vision of a DAO of AIs.
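To make the encryption point concrete, here is a minimal sketch of the idea, using the open-source python-paillier ("phe") package and an additively homomorphic scheme. It illustrates the general concept only; it is not part of SingularityNet's actual stack, and the numbers and variable names are invented for the example.

```python
# Minimal sketch: node B computes on node A's data without seeing the plaintext.
# Requires the open-source python-paillier package: pip install phe
from phe import paillier

# Node A owns sensitive numbers and the keypair.
public_key, private_key = paillier.generate_paillier_keypair()
revenues = [120.5, 340.0, 98.25]
encrypted = [public_key.encrypt(x) for x in revenues]

# Node B receives only the ciphertexts and the public key, yet it can still
# compute an encrypted sum and an encrypted scaled value (additive homomorphism).
encrypted_total = encrypted[0] + encrypted[1] + encrypted[2]
encrypted_fee = encrypted_total * 0.1  # hypothetical 10% service fee

# Only node A, holding the private key, can read the results.
print(private_key.decrypt(encrypted_total))  # 558.75
print(private_key.decrypt(encrypted_fee))    # ~55.875
```

With a scheme like that, one AI can let another do useful work on its data while only ever revealing the aspects it chooses to decrypt.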
One thing we realized recently was that introducing our own token could also be a valuable ingredient in the mix, because the different AIs in this DAO may be owned by different people (ultimately they will own themselves), and they want to exchange value along with exchanging data and requests for work. So having a token that's customized for the AIs to use to exchange value among each other can be valuable as well.
You can then introduce different states to that token, and you can sort of customize the economic logic for this economy of AIs. So this is one perspective on what we are building with the SingularityNet, from my point of view as an AI developer. If you look at it from a business point of view, then it becomes different and in some ways simpler.
That's because businesses all over the world now want to use machine learning as a service, and AI as a service, since only a few big tech companies can really afford to hire an army of AI developers themselves.
AI as a service
What most companies in the AI space want is to be able to use AI to perform certain tasks within their business operations, and to be able to request AI services from cloud providers.
That could be figuring out which of their customers to market a certain product to, optimizing their supply chain, or detecting fraud in their transaction database. Many, many different functions can be improved by AI now, so there's an increasing set of providers of AI as a service.
You have big companies like IBM with Bluemix, and Amazon and Google offering AI APIs to users of AWS or Google Cloud, but what the big companies, and now the startups, offer in terms of AI as a service is expensive, and it often requires awkward subscription plans where you have to buy into a large amount of services you might not need.
Also, the collection of AI functions offered commercially as a service is a small percentage of the AI that's out there as open source code on Github; there's a thousand times more AI functionality out there in open source code than there is wrapped up and offered as a service.
But most people can’t utilize all this because it’s a pain.
If someone has put some open source code in a Github repo, you download it, you try to get it to build on your Linux distribution, you go through the readme to figure out what it does, and then you figure out how to connect it to your company's IT system. Most people don't have that kind of expertise.
From that point of view, what we can do with SingularityNet is create a platform where a lot more AI tools can be wrapped up and then provided, via our AI-as-a-service API, to any business that wants to use them.
Sharing code
You can look at it from the point of view of the customers and from the point of view of the AI developers. From the AI developers' point of view, if you develop some funky kind of AI widget and you put it on Github, it's not much work to put it in a Docker container, put it on a server, and wrap it in the SingularityNet API, which is very simple. Then your AI code sitting in that container can be found by our discovery mechanism, because you've told our master node that it's there. Instead of just having your code on Github for geeks to download and work with, you've put it online and wrapped it in the API in a way that anyone who finds it through the SingularityNet discovery mechanism can use it. Then you can get compensated in our token by the people who have used your code and its services.
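As a rough sketch of what "wrap it in an API" looks like in practice, here is a toy HTTP wrapper around an open-source tool. The endpoint, port, and summarize function are hypothetical placeholders rather than the actual SingularityNet API, which is not shown here.

```python
# service.py -- hypothetical wrapper exposing an open-source AI tool over HTTP.
# The real SingularityNet API differs; this only illustrates the general shape
# of packaging a tool so a container can serve it.
from flask import Flask, request, jsonify

app = Flask(__name__)

def summarize(text: str) -> str:
    """Placeholder for whatever open-source summarizer the developer wraps."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:2]) + "." if sentences else ""

@app.route("/summarize", methods=["POST"])
def summarize_endpoint():
    payload = request.get_json(force=True)
    return jsonify({"summary": summarize(payload.get("text", ""))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # the container exposes this port
```

Roughly speaking, all that remains is a Dockerfile that installs the dependencies and runs this script, putting the container on a server, telling the discovery mechanism it's there, and metering calls to it so they can be compensated in the token.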
CT: Is that the only incentive?
BG: Well, there’s a lot of incentives. Why do people put their code in Github right now? They just put it because they want to contribute it to the world, right?
CT: Yeah, right. So it's more the moral incentives that are at work?
BG: It's a combination, right? If you give people the ability to monetize their open source code, that's even better, because look at what happens now: many people will put their code online on Github just to contribute to the community.
On the other hand, many of them now also start AI startups wrapped around that code; they fork the code and make it proprietary, then their startup is bought by a big company three years later, and that developer winds up an employee of a big company, which may not have been the life plan they wanted.
But they made a startup, they got VC money, and then the acquisition exit strategy is sort of the norm now, right? So in a way the startup ecosystem serves as a recruiting mechanism to suck young guys who didn't want to work for big companies into doing it after all. And then they may quit after a while and start a new company.
But something like the SingularityNet can provide a new way for people to monetize their AI without having to sell out to a big company. If you have a global decentralized network, you can put your AI there, and anyone can access it and use it in a way that connects it with other AI tools that are out there. That provides a way for people to monetize what they've done without having to go the currently standard route of creating a startup and selling it to Google or Amazon or something.

The subtler aspect is that it's actually more than just a sort of cloud-based app store for AIs, because the different AI tools now sitting in Github are not configured to talk to each other and work together with each other.
Subnetworks and learning
Now, when you build an application as an application developer, you're connecting AIs together in this sort of bespoke way to work for your application. For the Sophia robot, we're using a lot of tools from OpenCV for computer vision processing, we're using other people's deep neural nets for recognizing faces and objects, we're using Google Voice for speech processing, and we're using another company's tool for text-to-speech. We're using our own AI tools for memory and learning and personality, and then we're connecting dozens of different AI tools in a specific architecture to control the robot.
We can do that because we know fairly well what we're doing. What we'd like to create is a platform in which AI tools can connect with each other in an automated, or at least semi-automated, way where, for example, if you need a document summarized, as a user you can put a request into the SingularityNet saying, “Hey, I need a document summarized.”
You may get bids from twenty different document summary nodes and you can look at the reputations that each one has, and you may choose one with the right balance of reputation and price, and then that node will provide you with a summary of documents that you feed to it.
But now, if that document summary node hits something in the document it can't deal with, it can outsource it to another node. So suppose the document summary node that you're paying to summarize your documents hits an embedded video. Well, it can outsource that to a video-summarizing node and pay it some fraction of the money it was paid. Or, if it sees a quote in Russian and it doesn't know Russian, it can outsource that on a microservices basis to a Russian-to-English translation node that does the translation and sends it back to the document summary node.
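The outsourcing pattern described above can be sketched in a few lines; the node classes, fees, and registry below are purely illustrative assumptions, not SingularityNet code.

```python
# Illustrative sketch: a summary node delegates sub-tasks it cannot handle
# to specialist nodes and passes along part of its payment.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Node:
    name: str
    skills: Dict[str, Callable[[str], str]]  # task type -> handler
    fee: float                               # price per sub-task, in tokens

class SummaryNode(Node):
    def handle(self, item_type: str, payload: str, registry: Dict[str, Node]) -> str:
        handler = self.skills.get(item_type)
        if handler:                        # we can handle this item ourselves
            return handler(payload)
        helper = registry.get(item_type)   # otherwise, outsource it
        if helper is None:
            return f"[unhandled {item_type}]"
        result = helper.skills[item_type](payload)
        self.pay(helper, helper.fee)       # pay the helper out of our own fee
        return result

    def pay(self, helper: Node, amount: float) -> None:
        print(f"{self.name} pays {helper.name} {amount} tokens")

# Specialist nodes the summarizer knows how to reach.
video_node = Node("video-summarizer", {"video": lambda v: f"summary of {v}"}, fee=0.2)
ru_node = Node("ru-en-translator", {"russian": lambda s: f"translation of {s}"}, fee=0.1)
registry = {"video": video_node, "russian": ru_node}

summarizer = SummaryNode("doc-summarizer", {"text": lambda t: t[:60]}, fee=1.0)
print(summarizer.handle("russian", "a quote in Russian", registry))
```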
So you can have the formation of federations or subnetworks of AI nodes, and there's learning that can happen there, because if this document summary node learns that for a certain type of video it should go to this particular video analysis node, that's learning on the level of the connections between the two different AI nodes. In a way, that's analogous to the learning the brain does: if two neurons are often useful together, the connection between them is reinforced, which is long-term potentiation, or Hebbian learning.

So in our case you have learning within each AI node, if it's a machine learning process, but you also have learning on the level of the connections between the AI nodes, which is learning on the whole network level, and that's interesting.
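A toy version of that network-level learning might look like the sketch below, where a node's routing table reinforces whichever helper proved useful; the update rule and constants are invented to illustrate the Hebbian analogy, not taken from the actual platform.

```python
# Toy sketch of "learning on the connections": a node strengthens its link to
# whichever helper gave useful results, loosely analogous to Hebbian
# reinforcement between neurons that are often useful together.
import random
from collections import defaultdict
from typing import List

class Router:
    def __init__(self, learning_rate: float = 0.1, decay: float = 0.01):
        self.weights = defaultdict(lambda: 1.0)  # (task_type, helper) -> strength
        self.learning_rate = learning_rate
        self.decay = decay

    def choose(self, task_type: str, helpers: List[str]) -> str:
        # Pick a helper with probability proportional to connection strength.
        strengths = [self.weights[(task_type, h)] for h in helpers]
        return random.choices(helpers, weights=strengths, k=1)[0]

    def feedback(self, task_type: str, helper: str, useful: bool) -> None:
        key = (task_type, helper)
        if useful:
            self.weights[key] += self.learning_rate            # reinforce the link
        else:
            self.weights[key] = max(0.1, self.weights[key] - self.decay)

router = Router()
for _ in range(100):
    helper = router.choose("video", ["video-node-a", "video-node-b"])
    router.feedback("video", helper, useful=(helper == "video-node-a"))
print(dict(router.weights))  # the consistently useful connection ends up much stronger
```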
Now, the Blockchain here is sort of part of the plumbing of the network, right? But that plumbing is valuable, because the lack of that sort of layer is part of what made it hard for us to build this sort of thing in the late 1990s when we were first trying.
There was no homomorphic encryption then, and if you wanted to make your own token for payment, there was no cryptographic mechanism to make that reasonably efficient. Having that plumbing there is valuable, just like having GPUs now is valuable for doing distributed vision processing. So now we have a whole assemblage of infrastructure technologies that make it possible to build this as a sort of upper layer on top of the infrastructure.
CT: Yeah, but the obvious question would be: couldn’t we end up with AI controlling the whole process?
BG: Well that’s the goal.
CT: For its own purposes that we don't know about, that we're not aware of.
BG: Well, it's hard to know what direction the evolution of AI is going to go. I think that in the long term, which may just mean a few decades from now, AIs will have much greater intelligence than human beings.
CT: So it sounds like a sci-fi thing?
BG: I mean, if we go in the science fiction direction, I think that a few decades from now humans will have two choices. One is brain-computer interfacing or mind uploading: merge your brain with the AI mind matrix. The other is to just live happily in the people zoo with the other zoo animals.
I mean, logically speaking, those are the options that probably exist now. In the interim before we get there, though, there are a lot of interesting things that can happen.
I tend to think the odds of a good outcome are better if AI is developed in a more democratic way, so everyone can contribute and everyone can benefit.
I don't like the dynamic I see where AI is sucked up more and more into a few big governments and big corporations.
CT: Yeah, also, defense.
BG: Yeah, so you have defense, where AI is developed to kill people and AI is developed to spy on people. Then you have Google, where AI is developed to brainwash people into buying stuff they don't need, which is basically advertising. These are all parts of human nature, but they're not all there is to human nature. There are a lot of other applications for AI that get far fewer resources because the profitability aspect is more difficult.
Next destination of outsourcing
For example, I said our AI team is based all over the world. I'm personally based mostly in Hong Kong, and actually our biggest AI office is in Addis Ababa, Ethiopia. We have 25 or 30 AI developers and a few dozen interns there; it's a low-cost development center. The universities there are pretty good, so we can hire good young graduates.
Africa will be the next destination for outsourcing I think, because Asia and Europe are already expensive. But spending time in Africa, you see so many needs for AI technology.
We're developing an application there that identifies early stages of crop disease from an image of a plant's leaf. We're also developing tools that help teach rural children who don't have access to good education, so AI tutoring systems.
But this sort of beneficial AI application doesn't get much funding compared to killing, spying, or advertising, because there's not as much money in it.
However, if you have a more decentralized platform for AI development, then developers there can fully participate and users there can use the tools in the AI mindcloud in the SingularityNet. It doesn't have to go through the profit center of a big company.
So you could have a developer in Uzbekistan upload an AI node doing machine learning, and then a user in Ethiopia could use that node to identify crop disease in a leaf. They may pay the developer in Uzbekistan something for that through the SingularityNet’s token exchange mechanisms, and that exchange can happen, whether or not there is enough profitability in that application for it to be interesting to IBM or Google.
When all AI is routed through the military, big government, or big companies, the long tail of AI applications that don't have much profit associated with them, and the long tail of AI tools that may only be good for a certain niche of things, get left out of the current way of doing things.
In the more decentralized approach, you can have more participation by developers and users all over the world, and I think that in essence will make it more likely that as AI gets smarter and smarter and smarter, things will go in a positive direction.
We can't have any guarantee, but as messy as it is, I would rather have the human race as a whole participating in the growth of AGI than have it just be, like, the US and Chinese armies and Google and Baidu or something, not that those are bad people. Google is run by good-hearted people, but they have the one goal of maximizing shareholder value.
It just happens that this broader-based, decentralized set of AI mechanisms is also probably the best way to lead to the emergence of advanced levels of general intelligence.
You have a sort of political and benefit-oriented motive, and you have a research-oriented motive, and they seem to fit together, because for both of those you want this sort of diverse and flexible decentralized breeding ground of intelligence.
CT: Do your tokens have a way to incentivize the right thing, rather than the AI using them for its own purposes or someone controlling the whole thing? What kind of mechanism do you use?
BG: We have a democratic governance mechanism built in where the token holders vote on matters that pertain to the dynamics of the whole network.
CT: Is there a mechanism to prevent anyone from controlling the bulk of it?
BG: Well, democracy is risky in that sense. We do have a pool of tokens reserved for beneficial uses, and the decision of what counts as a beneficial use is made democratically. I think in the end you either have a democracy or a dictatorship.
In the beginning it will be more of a dictatorship, because the founders own a lot of tokens, just as Ethereum is in a way a benevolent dictatorship of Vitalik. Since the founders and founding organizations own a lot of tokens at the start, the democratic mechanism will in essence be dominated by them, but as the system develops, the democracy will become more and more a matter of whoever owns tokens getting to vote.
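A bare-bones illustration of that token-weighted voting follows; the balances, names, and simple-majority threshold are assumptions for the example, not the network's actual governance rules.

```python
# Illustrative token-weighted vote: each holder's vote counts in proportion to
# the tokens they hold. Balances, names, and the threshold are made up.
from typing import Dict

def tally(balances: Dict[str, float], votes: Dict[str, bool]) -> bool:
    """Pass a proposal if token-weighted 'yes' votes exceed half the voting supply."""
    voting_supply = sum(balances[voter] for voter in votes)
    yes_weight = sum(balances[voter] for voter, choice in votes.items() if choice)
    return yes_weight > voting_supply / 2

balances = {"founders": 400.0, "alice": 50.0, "bob": 30.0, "carol": 20.0}
votes = {"founders": True, "alice": False, "bob": False, "carol": False}
print(tally(balances, votes))  # True: a large founding stake dominates early votes
```

As the token supply spreads beyond the founding organizations, the same tally naturally shifts toward the broader community of holders, which is the dynamic Goertzel describes.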