last night i awoke in your dream

(a surrealist-style poem popped into my head when i sat down at the computer today …i mean … why the fuck not…)


last night I awoke in your dream

ben goertzel

Last night I awoke in your dream

In the glow beneath your eyelids

Long yellow legs breasts o my heart

Fertile gaze of the nothingness

 

Last night I awoke in your dream

In the midst of a mad party

Pigs and chickens in Armani suits

whistling Beethoven

and you were there, naked,

skin shining with midsummer rain

and you called to me “Knowledge!

Science! Excellence!

 

“Serve me,” you said, “feel my

perfection. Deliver me bliss and

your existence. Show me joys past

description – or else – ”

 

I smiled at you willingly, ran like wild toward

your sweet flesh, and then

I was on a bare mountain road, surrounded

by hungry goats

 

“I love you,” I called, “please come back!”

 

The pigs in the suits returned, offering

me martinis

 

Last night I awoke in your dream

and I occupied your body

For a minute I was a beautiful young

woman – soft taut skin, curious energy,

no floppy old balls between my legs,

and wings on my back of course,

such glorious-colored feathers

 

Careening in the seas in your skull,

last night I passed through the nameless portal

the curve of your golden eyes embraced me

 

Last night I awoke in your dream,

and I saw you awoke in mine,

and we stared into each other’s

eyes from behind each other’s eyelids

DeepMind, Go, and Hybrid AI Architectures

I like Gary Marcus’s article on DeepMind and their recent awesome Go-playing AI software, which showed the ability to play at a top-notch level (not yet beating the world champion, but beating the European champion, which is a lot better than I can do….)

One good point Gary makes in his article is that, while DeepMind’s achievement is being trumpeted in the media as a triumph of “deep learning”, in fact their approach to Go is based on integrating deep learning with other AI methods (game tree search) — i.e. it’s a hybrid approach, which tightly integrates two different AI algorithms in a context-appropriate way.
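That hybrid shape can be sketched in a few dozen lines. To be clear, the snippet below is not DeepMind’s system: there is no neural network here (a flat `policy_prior` function stands in for the learned policy), and a trivial take-1-to-3-stones game stands in for Go. The game, the function names, and the parameters are all my illustrative assumptions. What it does show is the integration point Gary is talking about: a prior over moves biasing which branches a Monte Carlo tree search explores (PUCT-style selection).

```python
import math, random

# Toy stand-in for Go: one pile of stones, a move removes 1-3 stones,
# whoever takes the last stone wins.
def moves(n):
    return [m for m in (1, 2, 3) if m <= n]

def policy_prior(n, m):
    # Stand-in for a learned policy network: just a flat prior here.
    # In an AlphaGo-style system, this is where deep learning plugs in.
    return 1.0 / len(moves(n))

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.visits = 0
        self.value = 0.0    # total reward, from the view of the player to move here
        self.children = {}  # move -> Node

def select_move(node, c=1.4):
    # PUCT-style selection: value term plus prior-weighted exploration term.
    def score(m, child):
        q = -child.value / child.visits if child.visits else 0.0
        u = c * policy_prior(node.stones, m) * math.sqrt(node.visits + 1) / (1 + child.visits)
        return q + u
    return max(node.children.items(), key=lambda mc: score(*mc))

def rollout(stones):
    # Random playout; returns +1 if the player to move ends up winning.
    to_move = +1
    while stones:
        stones -= random.choice(moves(stones))
        to_move = -to_move
    return -to_move  # the player who just took the last stone wins

def simulate(node):
    # One MCTS iteration: select down the tree, expand, rollout, back up.
    path = [node]
    while path[-1].children:
        _, child = select_move(path[-1])
        path.append(child)
    leaf = path[-1]
    if leaf.stones:  # non-terminal leaf: expand it
        for m in moves(leaf.stones):
            leaf.children[m] = Node(leaf.stones - m)
    reward = rollout(leaf.stones)
    for n in reversed(path):
        n.visits += 1
        n.value += reward
        reward = -reward  # flip perspective at each ply

def best_move(stones, iters=2000):
    root = Node(stones)
    for _ in range(iters):
        simulate(root)
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]
```

Swapping the flat prior for a trained policy network, and the random rollout for a learned value estimate, is the direction the real system takes; the point here is just that the search and the learned prior are two different algorithms doing one job together.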

OpenCog is a more complicated, richer hybrid approach, which incorporates deep learning along with a lot of other stuff….. While Gary Marcus and I don’t agree on everything, we do seem to agree that an integrated approach combining fairly heterogeneous learning algorithms/representations in a common framework is likely to do better than any one golden algorithm….

Almost no one doubts that deep learning is part of the story of human-like cognition (that’s been known since the 1960s actually)…. What Gary and I (among others) doubt is that deep learning is 100% or 80% or 50% of the story… my guess is that it’s more like 15% of the story…

Go is a very tough game, but in the end a strictly delimited domain. Handling the everyday human world, which is massively more complex than Go in so many ways, will require a much more sophisticated hybrid architecture. In OpenCog we have such an architecture. How much progress DeepMind is making toward such an architecture I don’t pretend to know, and their Go-playing software — good as it is at what it does — doesn’t really give any hints in this regard.

OpenAI — quick thoughts

People keep asking me for comments about OpenAI.   Rather than pasting the same stuff into dozens of emails, I’ll just put my reply here…

(New links regarding OpenAI are appearing online frequently so I won’t try to link to the most interesting ones at the particular moment I’m writing this.  Use Google and investigate yourself if you wish 😉)

Obviously, OpenAI is a super-impressive initiative.  I mean — a BILLION freakin’ dollars, for open-source AI, wow!!

So now we have an organization with a pile of money available and a mandate to support open-source AI, and a medium-term goal of AGI … and they seem fairly open-minded and flexible/adaptive about how to pursue their mandate, from what I can tell…

It seems their initial focus is on “typical 2015 style deep learning”, and that their board of advisors is initially strongly biased toward this particular flavor of AI.  So they are largely thinking about “big data / deep NN” type AI …  This should have some useful short-term consequences, probably including the emergence of open-source computer vision tools that are truly competitive with commercial systems.

However, it is worth noting that they are planning on spending their billion $$ over a period of 10 yrs or more.

So, right now the OpenAI leadership is pumped about deep learning NNs, in part because of recent successes with such algorithms at big companies.  But their perspective on AI is obviously broader than that.  If some other project — say, OpenCog — shows some exciting successes, for sure they will notice, and I would guess will be open to turning their staff in the direction of those successes — and potentially to funding external OSS teams that look exciting enough…

So, overall, from a general view obviously OpenAI is a Very Good Thing.

Open source and AI Safety

Also, I do find it heartening that the tech-industry gurus behind OpenAI have come to the realization that open-sourcing advanced AI is the best approach to maximizing practical “AI Safety.”    I haven’t always agreed with Elon Musk’s pronouncements on AI safety in the past, but I can respect that he has been seriously thinking through the issues, and this time I think he has come to the right conclusion…

I note that Joel Pitt and I wrote an article a few years ago, articulating the argument for open-source as the best practical path to AI safety.   Also, I recently wrote an essay pointing out the weaknesses in Nick Bostrom’s arguments for a secretive, closed, heavily-regulated approach to AGI development.   It seems the OpenAI founders basically agree and are putting their money where their mouth is.

OpenAI and OpenCog and other small OSS AI initiatives

Now, what about OpenAI and OpenCog, the open-source AGI project I co-founded in 2008 and have been helping nurse along ever since?

Well, these are very different animals.  First, OpenCog is aimed specifically and squarely at artificial general intelligence — so its mandate is narrower than that of OpenAI.  Secondly and most critically, as well as aiming to offer a platform to assist broadly with AGI development, OpenCog is centered on a specific cognitive architecture (which has been called CogPrime), created based on decades of thinking and prototyping regarding advanced AGI.

That is, OpenCog is focused on a particular design for a thinking machine, whereas OpenAI is something broader — an initiative aimed at doing all sorts of awesome AI R&D in the open source.

From a purely OpenCog-centric point of view, the value of OpenAI would appear to be mainly: something with a significant potential to smooth later phases of OpenCog development.

Right now OpenCog is in-my-biased-opinion-very-very-promising but still early-stage — it’s not very easy to use and (while there are some interesting back-end AI functionalities) we don’t have any great demos.   But let’s suppose we get beyond this point — as we’re pushing hard to do during the next year — and turn OpenCog into a system that’s a pleasure to work with, and does piles of transparently cool stuff.   If we get OpenCog to this stage — THEN at that point, it seems OpenAI would be a very plausible source to pile resources of multiple sorts into developing and applying and scaling-up OpenCog…

And of course, what holds for OpenCog also would hold for other early-stage non-commercial AI projects.   OpenAI, with a financial war-chest that is huge from an R&D perspective (though not so huge compared to say, a military budget or the cost of building a computer chip factory), holds out a potential path for any academic or OSS AI project to transition from the stage of “exciting demonstrated results” to the stage of “slick, scalable and big-time.”

Just as commercial AI startups currently can get acquired by Google or Facebook or IBM etc., similarly, in future, non-commercial AI projects may get boosted by involvement from OpenAI or other similar big-time OSS AI organizations.  The beauty of this avenue, of course, is that, unlike the acquisition of a startup by a megacorporation, OpenAI jumping on board some OSS project won’t destroy the ability of the project founders to continue to work on the project and communicate their work freely.

Looking back 20 years from now, the greatest value of the Linux OS may be seen to be its value as an EXEMPLAR for open-source development — showing the world that OSS can get real stuff done, and thus opening the door for AI and other advanced software, hardware and wetware technologies to develop in an OSS manner.

Anyway those are some of my first thoughts on OpenAI; I’ll be curious how things develop, and  may write something more once more stuff happens … interesting times yadda yadda!! …

From DARPA to Toyota… so quickly… (reflection on the entrepreneurial state)

I’m reading The Entrepreneurial State (on my new Sony Digital Paper, which btw is far and away the most awesome e-reader created as of 2015…), which makes a pretty strong case that gov’t research funding, rather than VCs and startups, has been the primary engine of tech innovation….

Self-driving cars are not discussed in the book but seemed to me an example of this — DARPA’s driving grand challenges seemed to rather quickly pave the way for Google and then pretty much every car company to jump into the self-driving cars arena.   All of a sudden self-driving cars are the new common sense rather than a niche techno-futurist idea.    But it was US gov’t investment that mediated this leap, getting the tech to the point where companies felt it was mature enough for them to jump in.

So beholding the recent DARPA robotics grand challenge, I wondered if it would have the same effect.  Would it spur companies to invest in follow-on technologies, taking up where the DARPA-funded entrants (largely universities, often with gov’t funding independent of DARPA) left off?

I didn’t have to wonder long, though.  Yesterday I saw news that Toyota is putting USD 1 billion into US-based AI research labs — and that this AI effort will be run by Gill Pratt, whose last job was at DARPA, running the robotics grand challenge.

The time-lag between the entrepreneurial state putting funding and focus on a certain idea or area, and big corporations taking over as R segues into D, grows shorter and shorter as the Singularity grows nearer…

(Whether we want the AGI revolution to be led by big companies is another question.  Obviously I’m pulling for a Linux-style open-source AGI, perhaps centered on OpenCog — AGI of, by and for the people … and the trans-people! ….  But that’s another story…)

 

Sociopaths flogging Aspergers …

The New York Times ran a pretty good article on the harsh, ruthlessly efficiency-driven work environment inside Amazon…


I take this as more evidence that the last gasp of the human-driven economy will consist of
  • A) scattered small groups of creative-minded maniacs spinning out new ideas
  • B) large, well-coordinated groups of overworked Aspergers people, ruled over and metaphorically flogged by overworked sociopaths [e.g. Amazon], dealing with the tricky bits of the scaling-up and rolling-out of these new ideas

After the robots and AIs have taken over all the other jobs, during an interim period, these may still be left — and then a few years or at most a couple decades later, they’ll be obsoleted too …

Hopefully, during this interim period, some of the ideas being implemented by the Aspergers/sociopath armies will involve distributing some resources to everyone else who has already been obsoleted from the economy as such…

Google DeepMind’s new Video Game AI

People are asking me today about the importance of Google DeepMind’s new video-game AI demo — see e.g. the Guardian article glowingly titled

Google develops computer program capable of learning tasks independently: ‘Agent’ hailed as first step towards true AI as it gets adept at playing 49 retro computer games and comes up with its own winning strategies

First off: Yeah, it’s certainly cool!!!

This is progress beyond DeepMind’s former Atari 2600 game demo, and I don’t want to pooh-pooh the amount of work and brilliance that goes into making something like this ACTUALLY WORK !! But this doesn’t especially feel to me like some sort of breakthrough, just good solid work in deep reinforcement learning… In particular, it doesn’t convince me that deep RL as DeepMind has habitually practiced it is an adequate approach to AGI….

The key issue is that these video games are very constrained domains, so that a deep reinforcement learning system can handle them without being able to interpret data in a broader context. This kind of constrained domain can obviously be handled by methods that wouldn’t work for broader, messier, more contextuality-rich real-world domains….

So to me “learning independently in a very constrained world” is not entirely the same problem as “learning independently in a big rich messy world” — the former can be done by a system that ignores a lot of stuff any real-world AI has to pay a lot of attention to…. The universe of simple video games they’ve dealt with in these fascinating experiments is in the end still a very small and special domain, for which a software system can be heavily tuned in various ways (i.e. even if the tuning is independent of which specific game is being learned, it can still be particular to the domain of “games of this type”).
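The “constrained world” point can be made concrete with about the simplest possible reinforcement learner: tabular Q-learning on a six-cell corridor. Everything below — the corridor, the reward, the hyperparameters — is an illustrative assumption of mine, not anything from DeepMind; their systems replace the lookup table with a deep network over raw pixels. But the same caveat applies either way: the learning is “independent” only because the whole world is tiny and fixed.

```python
import random

# A deliberately tiny "constrained domain": a corridor of 6 cells.
# The agent starts at cell 0; reaching cell 5 ends the episode with reward 1.
N, GOAL = 6, 5
ACTIONS = (-1, +1)  # step left / step right

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)  # walls at both ends
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

def greedy(Q, s):
    # Break ties randomly so the untrained agent doesn't get stuck.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    # One Q-value per (state, action) pair -- only feasible because the
    # domain is tiny; this table is exactly what a deep net replaces.
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action choice.
            a = random.choice(ACTIONS) if random.random() < eps else greedy(Q, s)
            s2, r, done = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```

After training, the greedy policy should walk right from every cell — “learning independently,” but only because the whole world fits in a 12-entry table. Scaling the same update rule to a big, messy, contextuality-rich state space is exactly where the hard problems start.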

Personally I think DeepMind would need to add a lot of extra stuff to their AGI approach as I understand it (at least, a lot of stuff beyond anything they’ve discussed publicly, and beyond anything hinted at by this demo), to approach AGI seriously. I think the kinds of deep learning and deep reinforcement learning they’ve focused on are only a moderate-sized fraction of the story of what’s needed for human-level cognition. To me, this demo doesn’t show that they have embraced the complexity and multi-aspect nature of what’s needed for human-level AGI. (Maybe they HAVE, behind the scenes, but this video doesn’t show it….)

Another way to put my point is: To really hit a home run their system will need to be able to learn to deal with new DOMAINS, not just new games within a domain for which their system has already been tuned and tweaked and human-configured….

One thing this demo does show is that, since their acquisition by Google, DeepMind is pushing ahead with AGI-oriented R&D according to their own tastes and directions, rather than simply applying their brainpower to Google’s search and ads products and so forth. That’s very nice to see 😉

New subscription page for AGI, Singularity email lists

In 2004 I started two email lists, one for AGI discussion and the other for Singularity discussion.   These are still rolling along, but the subscribe/unsubscribe page for them was on another website that went offline years ago.   I have finally put a new subscribe/unsubscribe page for the lists online, see http://wp.goertzel.org/email-lists/

The AGI list is co-administered and moderated by John Rose; the Singularity list is co-administered and moderated by Giulio Prisco and Amara Angelica from Kurzweilai.net.

Mailing lists aren’t as important in the Internet ecosystem as they once were, given the explosion of social networks of various forms; but IMO they still fill a useful role.   Social network sites come and go but email has been around a while and probably isn’t going anywhere, though it may adopt various forms….

Ten Years to the Singularity If We Really Really Try (edited book release…)

Happy Newtonmas fellow humans…

I have gathered together some of my old (2008-2011) essays from H+ Magazine and elsewhere, into a book released via Humanity+ Press, titled Ten Years to the Singularity If We Really Really Try, and Other Essays on AGI and its Implications.

Most of the essays have some new introductory material, putting them in the context of stuff that’s happened between the time they were written and late 2014.   The future is rapidly unfolding —

BICA conference @ MIT, Nov 7-9

Hey all — I’ll be at MIT at the BICA-2014 conference from Nov 7-9 … if anyone is around MIT and wants to chat about AGI or other interesting issues, send me an email and then come by BICA and join me and my OpenCog chums for a lunch or dinner or whatever….

THE ELEVENTY-TWELVE LAWS OF MADNESS

At last I have made a truly major scientific discovery.  It came to me while I was sleeping last night.

Due to interacting with a large number of “not traditionally sane” people recently, I have managed to induce the precise mathematical laws governing their internal and external operations.

(Or, OK, maybe it’s just a poem of sorts – whatever — )

Anyhow, hereby I present to you … THE ELEVENTY-TWELVE LAWS OF MADNESS:

(feel free to suggest additions in the comments!)


  • FIRST LAW OF MADNESS: WAHAHAHAHAHAHRRRRGGGGGHHHH !!!
  • INVISIBLE LAW OF  OF MADNESS :
  • WORST LAW OF OF MADNESS: FUCK YOU!  FUCK YOU! FUCK YOU!
  • THIRD LAW OF MADNESS: FOR EVERY ACTION THERE IS AN EIGHTY-TWELVE TIMES OR MORE WILDLY EXAGGERATED AND OPPOSITE OR RANDOMLY DIRECTED REACTION
  • BEST LAW OF  OF MADNESS: aaaawwwwwww….
  • KURZWEILIAN/MAYAN LAW OF MADNESS ; 20XY
  • FUCKING LAW OF MADNESS; FUCK FUCK FUCK FUCK FUCK
  • FINNEGAN’S LAW OF MADNESS: BABABADALGHARAGHTAKAMMIN ARRONNKONNBRONNTONNERRONNTUONNTHUNNTROVARRHOUNAWNSKAWNTOOHOOHOORDENENTHURNUK
  • LOVELY LAW OF MADNESS: i love u, i love u, i love u 😉
  • OTHER LOVELY LAW OF MADNESS: I LOVE EVERYTHING SO MUCH, IT FUCKING OVERWHELMS ME AND MAKES ME SPLIT MY BOUNDARIES AND I DON’T KNOW WHAT TO DO
  • HORNY LAW  OF MADNESS: IT’S TIME TO EJACULATE THE WHOLE GODDAMNED COSMOS — YEEAAHHHH  !!!!
  • SOCIAL LAW OF MADNESS: WHY AM I SURROUNDED BY SO MANY TOTAL FUCKING ASSHOLES? WHY ARE THEY TRYING TO PROGRAM MY BRAIN WHY WHY WHY WHY WHY WHY WHY?   RRRRARWWGGGRRRR!!!
  • INFINITY LAW OF MADNESS: This page intentionally left blank.
  • ELEVENTY-TEENTH PERCEPTUAL LAW OF MADNESS: what i see is what u get, hahaha 8-D
  • ELEVENTY-TEENTH FUCKING SHIT PERCEPTUAL LAW OF CRAP MADNESS: WHAT YOU SEE IS WHAT I GET, MOTHERFUCKER, AND THAT’S WHY I HAVE TO EAT YOUR FLESH !!!
  • MATHEMATICAL LAW OF MADNESS: I could  prove that mathematically but i don’t have the time — time is an illusion anyway — and how can anyone “have” anything since none of us exist ? In fact (there are no facts) sooo —
  • CONFUSED LAW OF MADNESS: I just have no fucking clue about any of this … including myself (whatever that may be); I mean …
  • SECOND LAW OF MADNESS-DYNAMICS : The amount of madness in a closed, open, opzed, clopen or any other kind of system or non systematic entity or nonentity is always going to fucking increase, subject only to the nonexistent constraint that time doesn’t exist anyway —
  • LAW OF MAXIMUM MADNESS PRODUCTION; something about madness increasing along trajectories? someone will work it out eventually…
  • OM LAW OF MADNESS : some crazy lady says my clit is a transcendent machine for turning physical movement into cosmogonic WHHHAAAAA???!!!!!
  • SEVENTY-TENTH LAW OF MADNESS ; I WANT U, I WANT U, I WANT U !!  WELL ACTUALLY I WANT TO EAT YOUR FLESH !!!  OR MAYBE I DON’T, I DON’T KNOW —
  • ANTI-KORZYBSKIAN LAW OF MADNESS: THE MAP SURE IS THE FUCKING TERRITORY IF I FUCKING SAY IT IS, MOTHERFUCKER!!! DON’T TRY TO TELL ME IT ISN’T OR I’LL FUCKING SMASH YOUR FUCKING FACE !!!
  • NAMELESS LAW OF MADNESS: blah blah blah blah blah blah
  • QUANTUM LAW OF MADNESS: I can’t understand quantum mechanics and i can’t understand you, therefore you are quantum mechanics !!!
  • RELATIVISTIC LAW OF MADNESS: EVERYTHING IS RELATIVE, THEREFORE YOU ARE MY RELATIVE, THEREFORE ALL MY PROBLEMS ARE YOUR FAULT AND I NEED TO FUCKING KILL YOU !!!
  • JEWISH LAW OF MADNESS:  i know it’s all my fault somehow ;-(
  • MCKENNA’S LAW OF MADNESS: THEY REALLY ARE ALIENS!!! REALLY!!!!!
  • STONED LAW OF MADNESS: Whoa…..  I mean, like — whooaaa…..  I mean… —
  • SENILE LAW OF MADNESS: Uhhhh……
  • LITERARY LAW OF MADNESS : WORDS HAVE ABSOLUTELY NO MEANING, BUT I HAVE TO PRODUCE A FUCKING LOT OF THEM ANYWAY !!!
  • TYPOGRAPHICAL LAW OF MADNESS: ALWAYS WRITE IN ALL CAPS!!! JUST BECAUSE !!!
  • WONDERFUL LAW OF MADNESS: IT’S JUST SO FUCKING GREAT TO BE ALIVE !!!
  • CONTEXTUAL LAW OF MADNESS:
  • TRANSPARENT LAW OF MADNESS: I CAN SEE EVERYTHING SO FUCKING CLEARLY NOW!!  WHY CAN’T YOU UNDERSTAND??!!
  • PROPHETIC LAW OF MADNESS: I CAN SEE THE FUTURE AND YOU CAN’T !!!   WHY DON’T YOU BELIEVE ME, GODDAMNIT ?!?!
  • MANIC LAW OF MADNESS: WHY IS EVERYTHING GOING SO SLOWLY?
  • MANIC LAW OF MADNESS: WHY IS EVERYTHING GOING SO QUICKLY??
  • CRACKPOT’S LAW OF MADNESS:  I AM SO RIGHT!  THEY ARE SO WRONG!  AND THIS REALLY  MATTERS A LOT !!!!
  • HAMEROFF’S LAW OF MADNESS: MICROTUBULES! MICROTUBULES! MICROTUBULES!
  • PSYCHIC LAW  OF MADNESS; Trans-quantum morphic resonance voodoo, eight billion and twelve fuckingteen; objective reality, zero …
  • DEPRESSIVE LAW OF MADNESS: Everything is bad and annoying and upsetting , and whatever may happen, it certainly always will be….  There is no fucking hope at all; probably not for anything , definitely not for me …
  • BORING LAW  OF MADNESS: EVERYTHING IS SO FUCKING UTTERLY BORING!!! … ESPECIALLY MY STUPID FUCKING BORING MIND THAT THINKS EVERYTHING IS SO FUCKING BORING AND WANTS TO KEEP TYPING BORING THINGS IN ALL FUCKING CAPS
  • VOLATILE LAW  OF MADNESS: I LOVE YOU! I HATE YOU!  I LOVE YOU!  I HATE YOU!  I WAAAAAHHHHUUAAGGHHH!!!!
  • SUICIDAL LAW OF MADNESS:  … life is so fucking pointless and painful; i really have to kill myself; i can’t bear one more fucking minute of this — but killing myself seems really fucking annoying and painful too — fuck fuck fuck fuck fuck …
  • EMPTY LAW OF MADNESS: [         ]
  • FUCKING LAW OF MADNESS AGAIN: FUCK FUCK FUCK FUCK FUCK FUCK
  • BLISSFUL LAW OF MADNESS: 😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉  😉 😉 😉 😉
  • EXTRA LAW OF MADNESS
