speechcontrolteam mailing list archive
Fwd: Why this is secret...
Please read with an open mind.
---------- Forwarded message ----------
From: danteashton@xxxxxxxxx <danteashton@xxxxxxxxx>
Date: Fri, Jan 14, 2011 at 10:23 PM
Subject: Why this is secret...
To: Jacky Alcine <jackyalcine@xxxxxxxxx>, "K.de Jong" <undifined@xxxxxxxxx>
Some have expressed concern over the secrecy surrounding the Wintermute
project; this document explains why it is necessary.
*Why Wintermute must be kept under lock-and-key.*
Wintermute's design is the result of almost a decade's worth of work and
research; other efforts have tried to produce a self-teaching chat-bot. They
have failed. A few partial successes met unfortunate ends because of their
closed-source nature. AI is ripe for the picking by the open-source movement.
So why the secrecy? Simple. In the sciences (medicine included) there is a
certain amount of theft, and in computer science the field of AI is rife
with it, to the point that AI has seen no major advance since the early
90's, except at companies like Nuance, who mastered speech recognition, or
Google, who have a better grip than anyone on Machine Translation. It seems
that the only way of making any such progress is to work in a manner
counter-intuitive to the open-source movement. There are many, MANY dead
F/OSS AI projects. Many of those deaths resulted from people interested in
or capable of assisting NOT assisting or, worse (and far more common here
than in any other field), actively sabotaging the project.
In a field where money is virtually everything, AI requires far more money,
time and development than other engineering projects. This has led to a
significant number of projects failing, AND to those who succeed making it
VERY hard for competitors to derail them, through legal or illegal means.
These people often have an interest in doing just that.
I admit, the risk is small. It is huge, however, in comparison to the
pressures a music player faces from competing companies and developers.
As of now, the entire subject of AI in F/OSS is laughable at best; our major
office suite does not even have a grammar checker. At most, we have AI
engines for playing basic games, and a plugin for Banshee called Mirage. The
most advanced example is a decade-old OCR engine no one really develops
anymore. It's shameful that most of the AI we have is covered in the cake of
dust we call history. This project intends not only to dust those
technologies off, but to advance them.
I fully intend for the technologies and techniques created by and for
Wintermute to go to other projects (like a grammar checker).
I fully intend to make Wintermute F/OSS.
But only when it's ready.
For this to happen, Wintermute must be developed by people we trust.
Wintermute is not a media player, and it's not a file-browser. It is more
closely related to the psychology of a *child* than it is to a word
processor. It has more serious implications for the future than the
development of the Unity DE or the transition to Wayland. Technologies and
techniques which have remained locked down in a closed-source environment
have stayed there, unable to assist anyone, or the development of modern
software.
This is wrong.
Wintermute's development must go ahead, but Wintermute itself must be
created and developed by those who can be trusted to do so. Artificial
Intelligence is a field known for industrial sabotage. Do I mind forks? No,
I don't, except commercial forks. I love F/OSS. Do I intend to take
advantage of it (and you) for Wintermute? No, I don't. I'm not asking for a
lot; all I ask is that until the system is ready, we maintain a circle of
trust against those who could harm it. Call it secrecy, if you will. I call
it protecting a vulnerable system.
*Why Wintermute Must Be Protected.*
When Wintermute's systems are fully developed, the AI will be ready to
learn. This is when the system will be at its most vulnerable. Another
team, *Wintermute Psychology*, will take the system into its care.
The *wintermute-psych* team will be solely responsible for talking to the
system during its development. You may find this, again, another example of
secrecy. It is not. Once a neural network is taught something, it is
impossible to 'un-teach' it. Backup mechanisms may need to be constructed,
though I fear the amount of data, and consequently the number of backups
needed, will require an extremely large amount of storage, making the most
useful kind of backup impractical.
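As a sketch of what such a backup mechanism might look like (the function
name, file layout, and use of pickle are my own illustration, not anything
the project has built), a rotating checkpoint keeps only the last few
snapshots of the network's learned state, bounding storage at the cost of
losing older states:

```python
import pickle
from pathlib import Path

def checkpoint_weights(weights, directory, step, keep=3):
    """Write a snapshot of the network's state for a given training
    step, then prune old snapshots so storage stays bounded."""
    directory = Path(directory)
    directory.mkdir(parents=True, exist_ok=True)
    # Zero-padded step number so lexicographic sort is chronological.
    path = directory / f"wintermute-{step:08d}.ckpt"
    with open(path, "wb") as f:
        pickle.dump(weights, f)
    # Rotate: keep only the newest `keep` snapshots.
    snapshots = sorted(directory.glob("wintermute-*.ckpt"))
    for old in snapshots[:-keep]:
        old.unlink()
    return path
```

The rotation is exactly the trade-off described above: more retained
snapshots means a better chance of rolling back before a bad lesson, at a
storage cost that grows with the size of the learned state.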
Even with backup mechanisms, a trusted, dedicated team will need to be
assembled. All input AND output will need to be examined with the mental
equivalent of a toothpick, trying to foresee how the system will interpret
the input. It must be checked, double checked, triple checked and quadruple
checked for spelling, grammar, continuity and semantic errors before being
given to the system.
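A minimal sketch of the kind of vetting gate this implies (the function
name, the specific checks, and the vocabulary set are hypothetical; a real
reviewer would check far more): it flags problems rather than fixing them,
so a human resolves every flag before the text reaches the system:

```python
import re

def vet_utterance(text, vocabulary):
    """Return a list of problems found in a candidate teaching
    sentence; an empty list means it passed this automated pass."""
    problems = []
    if not text.strip():
        problems.append("empty input")
    # Unbalanced quotes or parentheses often signal a mangled sentence.
    if text.count('"') % 2:
        problems.append("unbalanced quotation marks")
    if text.count("(") != text.count(")"):
        problems.append("unbalanced parentheses")
    # Words outside the approved vocabulary are flagged, not fixed:
    # a reviewer decides whether each is a typo or a new term to teach.
    for word in re.findall(r"[A-Za-z']+", text):
        if word.lower() not in vocabulary:
            problems.append(f"unknown word: {word}")
    return problems
```

Flagging instead of auto-correcting matches the checking discipline above:
the automated pass only narrows what the human checkers must examine; it
never decides on its own what the system gets to see.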
The output, at least in the early stages, is expected to be vague and to
make little sense. Again, the team will need to analyse it in extreme
detail, looking at all possible interpretations.
The Wintermute-Psychology team will basically be acting as the first
(once-fictional) robopsychologists; dealing with teaching and understanding
the mind of a machine.
To this end, we will set up an institution nearer the time: the Synthetic
Intellect Institute (SII: rhymes with 'Psi', the symbol of the mental world).
For now, those interested in Wintermute-Psych are welcome to apply.
So you see, Wintermute in its developmental phase is open to attack, more
so in F/OSS than under any other development methodology. In its learning
phase it is extremely vulnerable, where even a lowly misplaced comma could
destroy the entire system.
Vi Veri Veniversum Vivus Vici
Sent from Ubuntu