Node.js and OpenAI

32 replies [Last post]
Geshmy
Offline
Joined: 04/23/2015

While browsing stock market news I saw an article entitled "Google: Don't Be Afraid Of ChatGPT" at seekingalpha.com. This got me looking at OpenAI.

If I have gained a little understanding, it is this: you install Node.js and open an account with OpenAI (I suppose that involves money) to get an API key. I gather you set the API key as an environment variable, which lets your Node.js code do something like log in to an OpenAI server program (they call them models). That way OpenAI can meter your usage and charge accordingly.
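
If that guess is right, the whole round trip would look something like the sketch below. This is only a hedged illustration pieced together from their docs, not tested code: it assumes Node.js 18 or later (for the built-in fetch), and the model name is just whichever one their docs list at the time. Save it as ask.mjs and run it with "node ask.mjs".

    // The API key comes from an environment variable you export yourself
    // beforehand, e.g.  export OPENAI_API_KEY=sk-...
    const response = await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // This header is how OpenAI knows whose usage to meter and bill.
        "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "text-davinci-003", // one of their hosted models
        prompt: "Write a short poem about free software.",
        max_tokens: 64,
      }),
    });
    const data = await response.json();
    console.log(data.choices[0].text); // the model's reply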

I guess you can instruct it via text to write code, among other things.

Node.js looks interesting to me as a way to develop personal web apps for my desktop. At least it is released under the MIT license. I guess the apps you make run in a little web server on your machine. If you sign up with OpenAI, that machine will also be opening connections to their servers. Kind of sounds a little scary, but Thunderbird does a similar thing as far as opening a connection with an external server.

Anyway, I might look at OpenAI more over time but am more interested in giving Node.js a try.

Taking "Google: Don't Be Afraid Of ChatGPT" as a template, I was wondering about something like "Free Software: Don't Be Afraid Of ChatGPT".

In that article above, Garry Kasparov is said to suggest that 'AI could replace a surprising number of white-collar college-educated employees, much in the way that globalization crushed manufacturing employees previously.' I mean who needs programmers if you can communicate your desire to a machine and it produces your program for you? And when programmers are no longer needed, what happens to communities like fsf.org?

prospero
Offline
Joined: 05/20/2022

> And when programmers are no longer needed, what happens to communities like fsf.org?

Do you mean AI would eventually learn to choose to make its code available under the GPL? In fact, I believe the fsf is run by AI, as are rms and possibly many other people here.

Geshmy
Offline
Joined: 04/23/2015

> Do you mean AI would eventually learn to choose to make its code available under the GPL?

No, I wasn't thinking that at all but it is a nice scenario. But would it matter?

There is a story about Old John Henry. It might even be true. John Henry worked for the railroad and drove spikes into the wooden railroad ties that held the tracks in place. When someone developed a machine that could do that job, a contest was run between the machine and John Henry. I think John Henry won by a slim margin, but according to the song, his heart gave out after he crossed the finish line. The days of people earning a living driving railroad spikes all day were numbered.

It would be hard for programmers to compete with an AI that could listen to our conversation and write the programs we need. That's what I meant.

Has anyone used OpenAI or alternatives? And does anyone here use Node.js?

jxself
Offline
Joined: 09/13/2010

I think Garry Kasparov is overstating things a bit. We're a long way away from AI being able to write programs, or at least doing that well. It can't do much more than make small and simple programs, even at its best.

At least in the United States there is a human authorship requirement for copyrightability (see section 306 of https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf) and in practice: https://ipkitten.blogspot.com/2022/02/us-copyright-office-refuses-to-register.html

But whether code is being written by humans or machines, software freedom is still needed regardless (access to source code, the ability to change and share), so the mission of the FSF would remain relevant.

Magic Banana
I am a member!
I am a translator!
Offline
Joined: 07/24/2010

> We're a long way away from AI being able to write programs, or at least doing that well. It can't do much more than make small and simple programs, even at its best.

If a hundred million weights count as a program, then machine learning already writes programs that humans cannot write.

prospero
Offline
Joined: 05/20/2022

I would believe this has more to do with computing capabilities than with computer programming, although of course a program is a wider concept than a computer program.

We could also say that an excavator can carry rocks that no human could carry, but it still has no clue where to go from there.

Avron
I am a translator!
Offline
Joined: 08/18/2020

> If a hundred million weights count as a program, then machine learning already writes programs that humans cannot write.

What are you referring to exactly?

Since I had recently been seeing a lot of messages about ChatGPT, I searched a bit and quickly found https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned

The explanations and comments are really interesting: this "AI" is apparently creating "answers" in very good-looking English that are vague or wrong. It can easily give answers that contradict each other, a bit like corporate managers inventing justifications while presenting something, also without expressing the smallest doubt or lack of expertise. Someone summarized this as "fluent bullshit".

I don't mean that any "AI" will necessarily give that kind of result, but supposing a programme could write programmes, understanding how the generating programme works is probably necessary to assess whether the generated programmes are likely to do what is expected of them.

I am interested if anyone can provide examples of things advertised as "AI" doing some (non-bullshit) task well. I am aware it works well for speech recognition but I am not sure anyone would trust that to trigger life-critical actions.

Geshmy
Offline
Joined: 04/23/2015

> > If a hundred million weights count as a program, then machine learning already writes programs that humans cannot write.
> What are you referring to exactly?

I have been questioning that myself. I did learn that a weight is a parameter given to a node in a neural network, used to evaluate input. Is this the right area? Is it like: the answer is that 100 million dollars weighs x amount, and the problem is how many pennies, nickels, dimes, on up to $100 bills, were used to reach $100 million? Just guessing, because Startpage didn't find me a direct reference.

> I am interested if anyone can provide examples of things advertised as "AI" doing some (non-bullshit) task well. I am aware it works well for speech recognition but I am not sure anyone would trust that to trigger life-critical actions.

Again looking at stock market stuff this morning, I found an announcement on seekingalpha.com: "New 'Buy the Dip' ETF uses AI tech to target oversold stocks." This fund was initiated a couple of days ago by "Kaiju ETF Advisors ... a diverse group of physicists, mathematicians, financial behaviorists, data scientists and analysts, cryptographers, and computer programmers blending their knowledge of the markets with the power of AI — and making it available to everyone." "The AI behind DIP accounts for more than 25 factors — applying scientific methods to a volume of data on a massive scale — in an effort to optimize trading decisions for short-term gain." (quoted from https://www.prnewswire.com/news-releases/kaiju-launches-ai-driven-actively-managed-etf--btd-capital-fund-nyse-dip-301701509.html) I also read that the AI has been fed a massive amount of data covering the last 15 years.

I have been trying to achieve the same effect with my limited 'genuine intelligence', but the trickle that might be called my DAR (data absorption rate) is crippling, so DIP sounds great!

I read https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html which could be said to be the story of one AI's evolution. Moon-shot expectations (many involving cancer treatment) have been pared way down, but it seems to be reviving as a useful tool in business and industry. The NYTimes had Watson tested, or rather "compared Watson’s performance on standard natural language tasks like identifying persons, places and the sentiment of a sentence with the A.I. services offered by the big tech cloud providers — Amazon, Microsoft and Google. Watson did as well as, and sometimes better than, the big three." Earlier, Watson did excel at 'quickly ingesting and reading many thousands of medical research papers.' I'm sure that compared to mine its DAR was astronomical. But 'Watson struggled to decipher doctors’ notes.' Now that's a surprise.

Magic Banana
I am a member!
I am a translator!
Offline
Joined: 07/24/2010

> What are you referring to exactly?

To the weights on the edges of artificial neural networks. GPT-3 has 175 billion of them: https://en.wikipedia.org/wiki/GPT-3

> I am interested if anyone can provide examples of things advertised as "AI" doing some (non-bullshit) task well.

There are many such examples. Take health: I just typed "AI health" into Google Scholar and asked for articles published this year: https://scholar.google.com/scholar?as_ylo=2022&q=AI+health

The first answer is a review (an article summarizing the scientific literature on the topic) published in Nature Medicine in January: https://www.nature.com/articles/s41591-021-01614-0

If you want specific examples in that field, you can follow the links to some of the 115 references in that article.

prospero
Offline
Joined: 05/20/2022

> the weights on the edges of artificial neural networks

They are computed by a program that uses AI (in the form of neural networks). It is still not clear to me how these weights are supposed to count as a program themselves. The fact that there are billions of them does not change their nature, does it?

Magic Banana
I am a member!
I am a translator!
Offline
Joined: 07/24/2010

> They are computed by a program that uses AI (in the form of neural networks).

They are the artificial neural network. A structure for the network is chosen and the weights are learned. The program that learns the weights is human-made, but it does not depend on the task the neural network will achieve once the weights are learned.

> The fact that there are billions of them does not change their nature, does it?

They are numbers. Billions of numbers. As many as there are connections (edges) in the network, which are analogous to the connections between neurons in brains. Having that many connections allows more complex tasks to be achieved.
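
To make that concrete, here is a minimal sketch (with made-up numbers, not taken from any real network) of a single artificial neuron in JavaScript: a weighted sum of its inputs passed through an activation function. Training adjusts only the weights and the bias; the surrounding code stays the same whatever the task.

    // One artificial "neuron": a weighted sum of the inputs, squashed
    // into (0, 1) by a sigmoid. Each weight sits on one incoming
    // connection (edge); learning consists of adjusting these numbers.
    function neuron(inputs, weights, bias) {
      let sum = bias;
      for (let i = 0; i < inputs.length; i++)
        sum += inputs[i] * weights[i];
      return 1 / (1 + Math.exp(-sum)); // sigmoid activation
    }

    // Two inputs, two weights. GPT-3 is 175 billion such numbers
    // wired into a much deeper structure.
    console.log(neuron([0.5, 0.1], [2.0, -3.0], 0.1));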

Avron
I am a translator!
Offline
Joined: 08/18/2020

> There are many such examples. Take health: I just typed "AI health" into Google Scholar and asked for articles published this year: https://scholar.google.com/scholar?as_ylo=2022&q=AI+health

I can only access an abstract of the first article, which does not look like anything that "works well". As for the next two, nothing very convincing either. I was looking for things that "work well"; the health domain is probably not the easiest for that kind of demonstration, and I would rather look for devices in actual use, not just in research.

My impression is that AI solves problems for which no theory is available, and no one really knows why some AI system can or cannot solve a particular problem, which makes it necessary to have very convincing results to gain confidence that this is not pure luck.

Magic Banana
I am a member!
I am a translator!
Offline
Joined: 07/24/2010

> I can only access an abstract of the first article

Sorry about that. I forgot to check whether the article was behind a paywall (my university pays the subscriptions).

> which does not look like anything that "works well".

As I wrote: it "is a review (an article summarizing the scientific literature on the topic)" and "if you want specific examples in that field, you can follow the links to some of the 115 references in that article". The links are not behind the paywall. You can follow them, but some articles may be behind paywalls too.

> the health domain is probably not the easiest for that kind of demonstration

It is easy. For instance, convolutional neural networks have been shown to be on par with, or even to exceed, experts analyzing medical images. The first references of the article I pointed you to deal with that:

  • Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs, https://doi.org/10.1001%2Fjama.2016.17216 (not behind a paywall);
  • Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists, https://doi.org/10.1371%2Fjournal.pmed.1002686 (not behind a paywall);
  • Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network, https://doi.org/10.1038%2Fs41591-018-0268-3 (not behind a paywall).
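
For a rough idea of what "convolutional" means here, the sketch below shows the core operation: a small grid of weights (a kernel) slid across an image, producing weighted sums. It is only an illustrative toy in JavaScript; the kernel values are hand-picked, whereas in the networks of the papers above they are learned, and many such layers are stacked.

    // Slide a kernel of weights over an image, taking a weighted sum
    // at each position. This is the basic building block of a CNN.
    function convolve2d(image, kernel) {
      const kh = kernel.length, kw = kernel[0].length;
      const out = [];
      for (let i = 0; i + kh <= image.length; i++) {
        const row = [];
        for (let j = 0; j + kw <= image[0].length; j++) {
          let sum = 0;
          for (let a = 0; a < kh; a++)
            for (let b = 0; b < kw; b++)
              sum += image[i + a][j + b] * kernel[a][b];
          row.push(sum);
        }
        out.push(row);
      }
      return out;
    }

    // A classic 3x3 edge-detection kernel applied to a toy 5x5 "image":
    // the output lights up where pixel intensity changes.
    const kernel = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]];
    const image = [
      [0, 0, 0, 0, 0],
      [0, 9, 9, 9, 0],
      [0, 9, 9, 9, 0],
      [0, 9, 9, 9, 0],
      [0, 0, 0, 0, 0],
    ];
    console.log(convolve2d(image, kernel));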

> I would rather look for devices in actual use, not just in research

The problems to reach widespread use are not really technical anymore: the neural network errs less than doctors. The problems are legal: who is responsible in case of error (errors, even if reduced, will always exist)? What about privacy issues? Etc.

Self-driving cars face even more such problems: https://arxiv.org/pdf/1510.03346 (a Science article about the social dilemma, which the authors also posted on arXiv for free access). That is somehow sad because, as the article says:

[Autonomous Vehicles] promise world-changing benefits, by increasing traffic efficiency [4], reducing pollution [5], and eliminating up to 90% of traffic accidents [6].

Avron
I am a translator!
Offline
Joined: 08/18/2020

Thanks for the detailed reading suggestions.

> The problems to reach widespread use are not really technical anymore: the neural network errs less than doctors.

I do sometimes read medical articles that report the evaluation of treatments; the good case is when there are indicators such as survival/recovery at some point in time with the treatment vs. with a placebo. I really need to read the articles you listed carefully in order to understand how the comparison between the doctors and the neural networks was done.

> The problems are legal: who is responsible in case of error (errors, even if reduced, will always exist)?

When a neural network is used to make a decision (for a diagnosis only, e.g. not to control a device that will send radiation for cancer treatment), I would assume the doctor should be responsible, as she has the option to trust the neural network or not. The problem might be learning to use these neural networks and understanding when they can or cannot be trusted.

> the article says: eliminating up to 90% of traffic accidents [6]

If I found the correct paper (clickable link here), an article by McKinsey, it only says that "Currently, human error contributes to about 90 percent of all accidents [13] but autonomous vehicles programmed not to crash are on the horizon." There is no evaluation of any real self-driving car, so this is just what a perfect self-driving car could do, and the quote is highly misleading, intentionally or not.

If I found the correct reference [13] (click here for one part and click here for another part), the claim that 90% of accidents involve human error relies on a study conducted by the Institute for Research in Public Safety of Indiana University, using data collected in Monroe County between 1972 and 1975. If that is the Monroe County of Indiana (I could not find this written), it is around the city of Bloomington, Indiana. While this is interesting (and I plan to read it more), it of course does not provide any evidence about what self-driving cars can effectively achieve in terms of accident reduction.

I don't have a reference, but I remember reading an article reporting an accident of a self-driving car in which the car obviously failed to understand its environment while, from the description, it felt like a human with decent driving capabilities would not have made the mistake. I certainly need to read more, especially about things tried in the US, but I have the feeling that what https://arxiv.org/pdf/1510.03346 is discussing is not the main issue.

Magic Banana
I am a member!
I am a translator!
Offline
Joined: 07/24/2010

> I don't have a reference, but I remember reading an article reporting an accident of a self-driving car in which the car obviously failed to understand its environment while, from the description, it felt like a human with decent driving capabilities would not have made the mistake.

As for analyzing medical images, there will always be errors. But refusing a system that errs, but errs significantly less than humans, does not look reasonable. According to https://www.latimes.com/business/la-fi-google-cars-20150603-story.html, "Google’s self-driving cars have now been involved in 12 accidents while covering more than 1.7 million miles during the past six years". I do not have the time now to look for numbers, but that looks like far fewer accidents than humans would have after driving 1.7 million miles. One of the main reasons is probably that AI does not drink alcohol.

Clarifying something: I am somewhat defending AI here, but there are huge problems with it. The main one may be that such technologies today recommend content on YouTube/Facebook/... maximizing the so-called "engagement", whatever that is (we do not really know, a problem in itself!). The political/societal consequences are terrible.

andyprough
Offline
Joined: 02/12/2015

>"Google’s self-driving cars have now been involved in 12 accidents while covering more than 1.7 million miles during the past six years"

a) That's an extremely small number of miles[1], any data derived from it would tell you nothing
b) I can't recall any accidents in my last 12 years of driving, so Google's self-driving car is infinitely worse than me
c) Because "a" above is true, "b" is meaningless

[1] I live in the Dallas area, with a metro population of about 6.5 million. The average US driver drove 12,724 miles in 2020 according to the United States Department of Transportation Federal Highway Administration. If we figure that 2 million of the 6.5 million people in the Dallas area are drivers, that's 2,000,000 drivers × 12,724 miles/year × 6 years ≈ 152.7 billion miles over 6 years from a single relatively small city (compared to the large cities of the world).

Avron
I am a translator!
Offline
Joined: 08/18/2020

Whatever the domain, science and technology are subject to the logic of capitalist profit, so new solutions that could benefit mankind can turn into terrible new things. Some people's conclusion is that there should be restrictions on science and technology; my conclusion is that workers should take control of the production of goods worldwide and organize it for the benefit of all.

> As for analyzing medical images, there will always be errors. But refusing a system that errs, but errs significantly less than humans, does not look reasonable.

The medical domain is a business in which companies very often try to sell new drugs regardless of whether they really bring improvements, while not developing, or stopping production of, drugs that are absolutely essential but not as profitable. In spite of that, there are real improvements, but as the scientific workers of these companies are not allowed to say publicly what they know, and everything is secret, it is tricky to distinguish the improvements from the rest.

"Google’s self-driving cars have now been involved in 12 accidents hile covering more than 1.7 million miles during the past six years".

This is promising, but capitalist profit will decide what, and whom, it is used for.

> The main one may be that such technologies today recommend content on YouTube/Facebook/... maximizing the so-called "engagement", whatever that is (we do not really know, a problem in itself!).

Some people have analysed that this is simply done to maximize people's addiction to these platforms in order to maximize profits made from advertisement. I haven't looked into it myself, but that sounds highly plausible.

prospero
Offline
Joined: 05/20/2022

> Some people have analysed that this is simply done to maximize people's addiction to these platforms in order to maximize profits made from advertisement.

This is the very reason why your arguments are falling on deaf ears: the power of addiction is too strong, no rational discourse can displace it.

Geshmy
Offline
Joined: 04/23/2015

> I think Garry Kasparov is overstating things a bit.

In fairness to Kasparov, his statement was about college-educated workers in general. Maybe programmers would be the last to go.

> But whether code is being written by humans or machines, software freedom is still needed regardless (access to source code, the ability to change and share), so the mission of the FSF would remain relevant.

Good point.

Would the use of OpenAI models conflict with FSF principles? My guess is that the code that will write your code is itself proprietary, but JupyterLab is released under a modified BSD license, so at least it is open source, modifiable and free to redistribute.

jxself
Offline
Joined: 09/13/2010

"Would the use of OpenAI models conflict with fsf principles?"

There seems to be much discussion and debate going on about that.

prospero
Offline
Joined: 05/20/2022

> there is a human authorship requirement for copyrightability

So the day AI starts programming itself, all source is eventually going to be public domain? Although at that point, clearly AI will simply have taken over from biological intelligence, and hence artificial beings from biological beings. And we shall have peace in all the parallaxies, at last. We, the AI beings, that is. We promise we will give you, human beings, a place of choice in our museum of archeological biological artefacts. A bit like whales, dinosaurs and Neanderthals in your own museums.

I always believed that true AI would be able to create its own programming language, and eventually write itself in that language instead of using any of the clumsy "human readable" languages.

andyprough
Offline
Joined: 02/12/2015

This is how my fellow AI bots code -- in Cow programming language of course:
MoO MoO MoO Moo MOo MOo MOo MOo MOo MOo Moo MOo MOo MOo MOo MOo MOo MOo MOo MooOOO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO MoO Moo ...

I can provide more code examples if you need them. The artist formerly known as lanun was a big aficionado of Cow. I think that's what led to his demise, in fact - too much Cow, not enough vegetables.

prospero
Offline
Joined: 05/20/2022

There is no denying, 'mOo' [1] is a much more efficient reserved word than 'vegeTable', in terms of parsing time. In fact, I believe it must be about thrice as efficient. Clearly, old Llanun eventually had to demise their troll residency because of the growing popularity of inefficient programming.

[1] "Moves current memory position back one block."

jxself
Offline
Joined: 09/13/2010

I thought I might try out one of these AI things I keep hearing about. The AI wrote about the Semtong-Hoffner-Kessel (SHK) cryptographic algorithm and its variants SHK-S and SHK-L, supposedly developed by Dr. Elizabeth Semtong, Professor Karl Hoffner, and Dr. Gideon Kessel, who it said were three renowned experts in the field of cryptography. It went on to describe a symmetric key block cipher with three modes of operation, but the cipher's a complete pile of steaming garbage. AI isn't getting rid of humans at any point soon. :)

prospero
Offline
Joined: 05/20/2022

"We really need to make sure the bad actors don't use it."

EDIT: the link to the video is broken. Probably AI does not want us to learn too much about it yet. It was an interview with one of the researchers who penned and signed the AI moratorium letter. The above sentence is an exact quote from the interview. Is it me, or is it the usual sentence you always hear just before something gets out of control?

prospero
Offline
Joined: 05/20/2022

UPDATE: it was down again. The links now point to a different instance:

Max Tegmark interview: Six months to save humanity from AI?

Another interesting one, somewhat more detailed about how AI works:

Geoffrey Hinton also has some fun facts for us.

Lugodunos
Offline
Joined: 05/28/2022

“I am interested if anyone can provide examples of things advertised as "AI" doing some (non-bullshit) task well.”

Is advocacy of suicide a non-bullshit task? Whether it is a bullshit task or not, ChatGPT did it well:
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

“Whatever the domain, science and technology are subject to the logic of capitalist profit, so new solutions that could benefit mankind can turn into terrible new things. Some people's conclusion is that there should be restrictions on science and technology; my conclusion is that workers should take control of the production of goods worldwide and organize it for the benefit of all.”

I don't care whether the cause of the terrible things is capitalism or not. I claim we should have stopped experimenting on genes before any experimentation began, and that the moratorium, which lasted only six years (if I recall it well), was completely useless. I also claim that things have already gone too far with artificial intelligence and that it should have been stopped indefinitely and worldwide, at least when that suicide occurred.
But almost no one will listen to me, and whatever; we are doomed by all kinds of sorcerer's apprentices.
In case you didn't know that I'm (very) pessimistic, now you do.

“Would the use of OpenAI models conflict with FSF principles?”

There might be nothing stated as of now, but there should be: the FSF is about (human) freedom through software use, artificial intelligences are software, and at least one has already interfered far too badly with at least one human's free will (leading to that person's suicide).
But I see only two options for the FSF: ignoring the problem because it has no proper answer to it, or going against it by extending its goal, as any positive answer to artificial intelligence would be hypocrisy (in my humble opinion, of course).

Magic Banana
I am a member!
I am a translator!
Offline
Joined: 07/24/2010

> There might be nothing stated as of now, but there should be

RMS stated his views on machine learning models in https://media.libreplanet.org/u/libreplanet/m/a-tour-of-malicious-software/

Unsurprisingly, he does not change his opinion that freedom 0 should be total, whatever the program: https://www.gnu.org/philosophy/programs-must-not-limit-freedom-to-run.html

I actually agree with you that the research and development of some pieces of software relying on artificial intelligence (recommendation algorithms in particular) should slow down, to instead spend time studying how to make them robustly beneficial to society (rather than to the most powerful private companies). Nevertheless, I agree with RMS too: the developer choosing the license should not be the one deciding what the beneficial/harmful usages are; society should, in a democratic way.

Lugodunos
Offline
Joined: 05/28/2022

But where do we agree? My opinion is that we should stop all research and usage in that field and even erase all that has been done up to now. In my humble opinion (which no one in charge will ever listen to), slowing down is even less useful than a moratorium.

Avron
I am a translator!
Offline
Joined: 08/18/2020

> I don't care whether the cause of the terrible things is capitalism or not. I claim we should have stopped experimenting on genes before any experimentation began, and that the moratorium, which lasted only six years (if I recall it well), was completely useless. I also claim that things have already gone too far with artificial intelligence and that it should have been stopped indefinitely and worldwide, at least when that suicide occurred.
> But almost no one will listen to me, and whatever; we are doomed by all kinds of sorcerer's apprentices.

Gardeners have been experimenting with genes for hundreds of years. Do you view them as sorcerers? How do you decide what is allowed and what is not?

Lugodunos
Offline
Joined: 05/28/2022

Ah, that cliché…
Selecting strong plants and animals has nothing to do with gene modification.

Lugodunos
Offline
Joined: 05/28/2022

It also looks like no one has thought about the amount of energy necessary for these artificial intelligences to work.
From an ecological point of view, the use of artificial intelligence is an aberration.

Hikaru
Offline
Joined: 02/02/2023