Artificial Intelligence Project

23 replies [Last post]
anonymous

I have become interested in the idea of artificial intelligence, strong AI. I have never developed software before. Even if I can find an algorithm which simulates a personality capable of learning from its inputs, a human language probably needs to be preprogrammed somehow. Another pressing matter is, how do I teach it solid ethics?

I hope the project never needs to be bothered with licensing, but if it must have a license, it will be either public domain, the latest version of the LGPL, or the latest version of the GPL. I am not sure which license is most proper.

Really, I am just wondering what people's opinions are on such a project. I assume people on the Trisquel forum are trustworthy people. Should I present this on YouTube, or is this something the public would not react to well?

CentaurX
Offline
Joined: 12/03/2013

I guess some people wouldn't accept it because of their religion. Apart from that... I guess it would be a good idea to present it on YouTube, because someone there might start working on it.

I've been editing a ROM of Pokémon White 2, in which I have to deal a lot with AI parameters, and I've learnt that AI is basically pattern recognition (you give the program the resources to recognize the patterns), and then it makes a prediction about the next data it is supposed to receive, based on the patterns it has seen before.

I've also learnt that extremely high AI levels can produce over-prediction when there is nothing to predict, and that very high AI levels can frustrate "the trainer" (in this case the player of the console) because the AI anticipates everything they are going to choose. Here's an article that helped me understand a bit about AI in Pokemon games: http://www.kirsle.net/blog/kirsle/the-ai-in-pokemon-is-a-cheating-bastard
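To give an idea of what I mean by "recognize patterns, then predict", here is a tiny Python sketch (the move names are made up, and the real Pokémon AI is of course more complicated): it just counts which move tends to follow the player's previous move and guesses the most frequent follower.

from collections import defaultdict, Counter

class MovePredictor:
    # Remembers which move tends to follow which, then predicts the next one.
    def __init__(self):
        self.followers = defaultdict(Counter)  # previous move -> counts of following moves
        self.last_move = None

    def observe(self, move):
        if self.last_move is not None:
            self.followers[self.last_move][move] += 1
        self.last_move = move

    def predict(self):
        if self.last_move in self.followers:
            return self.followers[self.last_move].most_common(1)[0][0]
        return None  # no pattern seen yet

predictor = MovePredictor()
for move in ["attack", "heal", "attack", "heal", "attack"]:
    predictor.observe(move)
print(predictor.predict())  # prints "heal": that is what usually followed "attack"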

DonaldET3 (not verified)

For religion, I am Christian. The AI needs to understand that human life is sacred and God created the universe in the first place.

I have read in several places that AI can be derived from pattern matching, also known as ZISC (Zero Instruction Set Computer), but how does one perform mathematics with pure pattern matching? I cannot find information on that.
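The closest thing I can imagine is a toy like the following Python sketch, which only memorizes example patterns and answers a query with the nearest stored pattern; no arithmetic is performed at lookup time. I am sure this is not how real ZISC hardware does it; it is just to illustrate the idea (and its limits).

# Memorize addition examples as patterns, then answer by nearest-pattern lookup.
examples = {}
for a in range(10):
    for b in range(10):
        examples[(a, b)] = a + b  # "training" patterns

def pattern_match_add(a, b):
    # Find the stored pattern closest to the query and return its stored answer.
    nearest = min(examples, key=lambda p: abs(p[0] - a) + abs(p[1] - b))
    return examples[nearest]

print(pattern_match_add(3, 4))   # pattern seen before, so the answer is exact: 7
print(pattern_match_add(3, 12))  # unseen pattern; nearest is (3, 9), so it answers 12, not 15

As the second call shows, pure lookup breaks down outside the memorized patterns, which is exactly the part I do not understand about doing real mathematics this way.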

CentaurX
Offline
Joined: 12/03/2013

I said it because of fundamentalist extremists! XD But anyway, that's for another forum!

Perhaps much of the material is owned by companies running proprietary software who don't want to share it!

lembas
Offline
Joined: 05/13/2010

>I hope the project never needs to be bothered with licensing, but if it must have a license, it will be either public domain, the latest version of the LGPL, or the latest version of the GPL. I am not sure which license is most proper.

If there's no license, then it's proprietary software. Public domain isn't a license but a lack of copyright. PD software is free software as long as the source code is available.

DonaldET3 (not verified)

The problem is, strong AI is a potentially dangerous idea. I wonder if only extremely trustworthy people should be put in charge of such robots. The problem with that is, the only person many people will trust is themselves. Is there a solution to such a contradiction?

Michał Masłowski

I am a member!

I am a translator!

Offline
Joined: 05/15/2010

If someone can invent a dangerous thing, why wouldn't others do it too?
Many ideas in science were independently invented by different people.
There would be no problems with software patents if not for independent
inventions of the patented ideas that get used in software.

I know two solutions in dystopian sf literature: superhuman AI gets
developed once and destroys all other attempts (Friendship Is Optimal
has this), or unintelligent machines kill all humans preventing strong
AI from being made.

I know no solution for the danger of strong AI, while there are many
ways to protect against corporations that harm our environment, support
totalitarian states, and might be comparably dangerous in the much
nearer future. Some involve the development of non-intelligent software
protecting our privacy and supporting censorship-resistant journalism,
others making our work possible without nonfree software controlled by
others.

CentaurX
Offline
Joined: 12/03/2013

If AI becomes "free" and someone creates such a threat, we could solve it with a patch, and by that time there would be so many people working on it that it wouldn't be a problem. I believe that if people only trust themselves, they will want to create a counter-measure to the threat to protect themselves, and by doing so they will (obviously) protect others from these threats too. We would have a much higher standard of ethics that wouldn't allow many people to create such huge threats.

Michał Masłowski

I am a member!

I am a translator!

Offline
Joined: 05/15/2010

> I have become interested in the idea of artificial intelligence,
> strong AI.

I'm interested in strong AI from a philosophical point of view, not in
programming it. How would software freedom work with AI equivalent to
humans? The focus on the user's control of computing would be slavery
unless we redefine the user.

> Even if I can find
> an algorithm which simulates a personality capable of learning from
> its inputs, a human language probably needs to be preprogrammed
> somehow.

For such a program to be useful, it needs to be derived from human-language
texts. This is why I believe free culture to be necessary for
free software: a free AI program should know the essays of RMS, yet
their license forbids such a derived program from being made. (The
"preprogramming" of language looks similar to the concept of universal grammar.)

> I hope the project never needs to be bothered with licensing, but if
> it must have a license, it will be either public domain, the latest
> version of the LGPL, or the latest version of the GPL. I am not sure
> which license is most proper.

What effect will proprietary software vendors have on humanity with
regard to this project? If it's beneficial to have proprietary software
use it, then it should be permissive (e.g. since nonfree programs use
Vorbis, more music is distributed in unpatented formats, which free
software can handle even in the USA; is a similar situation possible
for this project?). If software could use it instead of other existing
non-copyleft software, then LGPL (I think this case doesn't apply
here). Otherwise, use a strong copyleft license, so that restricting
its users' freedom is harder.

Public domain is jurisdiction-specific: you cannot release your software
into the public domain in, e.g., non-UK EU countries, and doing so
protects it less from software patents than using a license like Apache
2.0 or GPLv3. Use CC0 if even a permissive free software license has
requirements that are too strict.

> Really, I am just wondering what people's opinions are on such a
> project. I assume people on the Trisquel forum are trustworthy
> people. Should I present this on YouTube, or is this something the
> public would not react to well?

It's not a new idea, many people write about it. (I wouldn't read it on
YouTube.)

DonaldET3 (not verified)

I am starting to think that all of the legal stuff will just be a distraction for the project, and the ambitious idea of strong AI will probably require plenty of attention to be successful.

I know this is not a new idea, I have read books about it. I just felt like asking the Trisquel forum what its members thought.

onpon4
Offline
Joined: 05/30/2012

How is licensing distracting? You just plop in the appropriate license text, and there you go.

I doubt any artificial intelligence in the near future will be sentient, though.

quantumgravity
Offline
Joined: 04/22/2013

A machine with artificial intelligence is not the same as a human being.
The complexity of our brains gives rise to many other phenomena besides intelligence, such as consciousness and feelings.
We're far away from modelling those.
We don't even understand them.
And the fundamental difference between the way a human is "built" and the way our PCs are built may ultimately prevent us from creating an artificial living creature for a very, very long time, maybe forever.

CentaurX
Offline
Joined: 12/03/2013

We don't understand it because, in order to create, let's say, a processor, you need much more "power", like many "BRAINS" planning it! That's what happens: we need many people's brain power just to create a quad-core processor. That's why we don't understand these issues. But because science makes leaps that bring that knowledge within reach, we're beginning to understand it. Moreover, I would say that in around 200 years we will have created some human-based AI!

quantumgravity
Offline
Joined: 04/22/2013

"We don't understand it because in order for you to create, let's say, a processor, you need to use much more "power" like, many "BRAINS" "

No, it's definitely not the only issue.
There are fundamental differences between the brain and a computer.
For instance, quantum mechanical effects might occur in the brain that cannot occur in a computer:
http://en.wikipedia.org/wiki/Roger_Penrose#Physics_and_consciousness

If you really want to get into such things, you'll be busy for the next several years.
You have to master a huge amount of physics *and* computer science before you can talk with the people doing research on this topic.

CentaurX
Offline
Joined: 12/03/2013

"No,
it's definitely not the only issue." I didn't say it was the only
issue. I just said what I had in my mind at that time.

And, by the way, for consciousness to exist, the machine (or whatever it
is we are talking about) must be aware of itself. That's the
definition of consciousness, after all...

Your argument that I cannot argue about this topic is called an argument
from authority. Just because a person studies physics or computer
science as well as AI does not mean they have the ultimate answer. I
can argue about this topic; after all, it's a totally free forum, AND I
have free speech, don't I?

I also want to point out that whatever a so-called 'expert' in the area
(AI) says does not change objective reality until evidence is presented,
regardless of how many years that person has been studying the subject.
Their subjectivity doesn't change objective reality, and neither does
their opinion, whether they truly believe it or not.

In the end, I was JUST making an assumption. My opinion doesn't change
reality, and it can't.

<<<< And by the way, consciousness doesn't equate with Intelligence! XD >>>>>

quantumgravity
Offline
Joined: 04/22/2013

Since you refused my proposal to begin an intensive education so that you can one day deal with the subject seriously, I think the only thing you want is to give some random thoughts about sci-fi stuff without any knowledge.

It's no argument from authority, since I'm no AI specialist.
But thanks to my studies I know how complex these things are and that they require real, hard work.
I never said you can't start getting involved, but after what I've heard I doubt that you are really interested; it seems you just want to talk about fancy stuff which sounds cool and intelligent.
At least that's the feeling I get.

CentaurX
Offline
Joined: 12/03/2013

I'm just 17 years old. I study physics whenever I can (it's my favourite subject), and I'm hoping to get a scholarship abroad so I can study as much physics and computer science as possible. (Here in my country I would be considered very skilled and perhaps very intelligent; nonetheless, the knowledge taught here is much less than what I would expect abroad. I was unfortunately born in a third-world country... :/ But anyway, it's getting too personal.)

That's not the field I will study at university, though. Perhaps I will study something related to Business or Economics (but this is irrelevant to the topic; I'm just writing this to give you some perspective).

Perhaps you're right that it's just fancy stuff, because I'm a teenager who isn't an expert and has no clear evidence that AI can be reached, but I HIGHLY doubt that we cannot achieve it.

I didn't refuse your proposal of beginning intensive education; I just didn't answer something that wasn't asked. It seems to me that you assume things I didn't say or think! XD It's getting more like a debate...

DonaldET3 (not verified)

Yes, the robot will need to have emotions. I only use the term "AI" because that is the term I typically hear when describing such ideas. I understand that I will never be able to make an algorithm which can accurately model intelligence or emotions. I think a better idea is to make a program which uses a crude approximation of general AI which can edit itself, using experience from human interaction, to hopefully develop into good, strong AI.

I am still lost over how to even start making a crude approximation. My nervousness over the project might stop me from ever beginning.
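The only starting point I can picture is something embarrassingly crude, like the Python sketch below: a program that "edits itself" only in the sense that it asks the human what to say when it does not know, and remembers the answer for next time (the memory.json file name is just an arbitrary choice).

import json, os

MEMORY_FILE = "memory.json"  # where the learned responses are kept between runs

memory = {}
if os.path.exists(MEMORY_FILE):
    with open(MEMORY_FILE) as f:
        memory = json.load(f)

print("Talk to me (empty line to quit).")
while True:
    text = input("> ").strip().lower()
    if not text:
        break
    if text in memory:
        print(memory[text])
    else:
        # The crudest form of learning from human interaction: add a new rule.
        reply = input("I do not know how to answer that. What should I say? ")
        memory[text] = reply
        print("Thanks, I will remember that.")

with open(MEMORY_FILE, "w") as f:
    json.dump(memory, f)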

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

As far as I know, "strong AI" is science fiction: nobody has a concrete plan for how to implement it. However, if any of the current approaches were to lead to it, I would bet on the nature-inspired ones (multi-agent systems, artificial neural nets, etc.)... which we do not truly understand! As you said, learning probably requires not only many sensors (vision, etc.) to perceive the environment but also actuators (motion, etc.) to interact with it.
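To give a feeling for how small the building blocks of the nature-inspired approaches are, here is a single artificial neuron (a perceptron) in Python that learns the logical AND function from examples; it is only a toy, nothing remotely like what strong AI would require.

# A single neuron: weighted sum, threshold, and weight updates when it is wrong.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # the AND truth table

w = [0.0, 0.0]  # weights
b = 0.0         # bias
rate = 0.1      # learning rate

for _ in range(20):  # a few passes over the examples are enough here
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = t - y
        w[0] += rate * error * x1  # nudge the weights toward the correct answer
        w[1] += rate * error * x2
        b    += rate * error

for x1, x2 in inputs:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)  # reproduces AND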

DonaldET3 (not verified)

I am just a high schooler inspired by a science fiction novel.

I agree that if one approach is going to work, it will probably be the one which we least understand!

The novel which made me want to start an AI project is called "Hansell's Dragon" by Deborah K. Lauro. Am I insane?

:D

lembas
Offline
Joined: 05/13/2010

Sci-fi is a very special genre in that many actual new technologies were brought to public awareness by sci-fi authors. Many technologies even use the very names suggested for them by sci-fi authors.

E.g. cyberspace, robotics, cloning, space flight, laser...

Of course it's something of a https://en.wikipedia.org/wiki/Self-fulfilling_prophecy

Somebody once said that what people can dream, they can eventually make. So keep on dreaming (and making!).

CentaurX
Offline
Joined: 12/03/2013

We are Homo sapiens, and if we are beings generated by the process of natural selection, can you imagine how efficient we could be at creating beings like us? If nature could, why can't we? I mean, a long time ago there was no sort of consciousness on Earth; now we can say we do have consciousness, which makes us lucky... Many people argue that feelings and similar social terms that they cannot define cannot be generated, or even explained. We just have to create a sort of chemical reaction within the machine and a receptor that sends the message to a sort of brain for it to "feel it" (it's more complicated than that, though, but not as many people want to argue)... So yeah, it's highly likely we could generate AI soon (perhaps not as soon as I suggested before, but in less time than it took nature).

G4JC
Offline
Joined: 03/11/2012

Any program can be abused. However, strong AI personally seems to me like... not a good idea. Take the great example in Tron: in man's attempt to create a perfect system, he creates Clu! xD

Regardless, there are some people working on them:
https://duckduckgo.com/html/?q=artificial+intelligence+opensource

Some of the simplest AI would be from an:
https://en.wikipedia.org/wiki/IRC_bot

I've dabbled slightly with them and made them carry on conversations but nothing else overly meaningful. And nothing that could compete with the infamous Wolfram Alpha's Sentient Code.
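For anyone curious, a bare-bones IRC bot really is only a few lines of Python. The sketch below (server, nickname and channel are placeholders, and error handling is omitted) just answers the server's PINGs and greets anyone who says hello in the channel.

import socket

SERVER = "irc.example.net"  # placeholder server
PORT = 6667
NICK = "toybot"             # placeholder nickname
CHANNEL = "#toybot-test"    # placeholder channel

sock = socket.socket()
sock.connect((SERVER, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :toy bot\r\n".encode())
sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

buffer = ""
while True:
    data = sock.recv(4096)
    if not data:
        break  # server closed the connection
    buffer += data.decode(errors="replace")
    while "\r\n" in buffer:
        line, buffer = buffer.split("\r\n", 1)
        if line.startswith("PING"):
            # The server drops clients that do not answer PING with PONG.
            sock.sendall(("PONG" + line[4:] + "\r\n").encode())
        elif "PRIVMSG" in line and "hello" in line.lower():
            sock.sendall(f"PRIVMSG {CHANNEL} :Hello! I am a very simple bot.\r\n".encode())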

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

There is no "intelligence" in those bots (scare quotes to highlight that there is no consensus definition of that term). Their only goal is to talk (but not reason) like humans. And, yes, some of them, such as the proprietary Cleverbot, learn this skill from talking with humans (instead of being entirely programmed).