Veštačka inteligencija
- Posts : 1049
Join date : 2012-02-11
- Post n°126
Re: Veštačka inteligencija
An interesting piece; yes, it's "index", but it makes a couple of good points
https://www.index.hr/vijesti/clanak/upozorenje-vrhunskog-fizicara-aiu-se-ne-smije-dati-tri-stvari-dali-smo-mu-sve-tri/2459184.aspx
- Posts : 3803
Join date : 2020-09-27
Location : Waystone Inn
- Post n°127
Re: Veštačka inteligencija
Oh my, I'm a carbon chauvinist
_____
my goosebumps have goosebumps
- Posts : 41623
Join date : 2012-02-12
Location : wife privilege
- Post n°128
Re: Veštačka inteligencija
I don't see why I would even agree to read something an algorithm has churned out of things already seen.
_____
cousin for roasting the rakija
And I say to myself in the dream: hey, you fool, you don't even have a sound system, you've just got those two little boxes next to the monitor, you'll see when you wake up...
- Posts : 8095
Join date : 2020-09-07
- Post n°129
Re: Veštačka inteligencija
Wouldn't you use a summarization algorithm to condense that bore of a text for you?
_____
Sweet and Tender Hooligan
- Posts : 41623
Join date : 2012-02-12
Location : wife privilege
- Post n°130
Re: Veštačka inteligencija
Notxor wrote: Wouldn't you use a summarization algorithm to condense that bore of a text for you?
I don't know how much I'd trust it. The one Asimov had imagined would mean more to me, the one that throws out the stock phrases and the padding and pulls out the essence, though I'm afraid the result would quite often read „“.
_____
cousin for roasting the rakija
And I say to myself in the dream: hey, you fool, you don't even have a sound system, you've just got those two little boxes next to the monitor, you'll see when you wake up...
- Posts : 3470
Join date : 2014-10-29
- Post n°131
Re: Veštačka inteligencija
https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey
_____
you cannot simply trust a language model when it tells you how it feels
- Posts : 3620
Join date : 2018-07-03
- Post n°132
Re: Veštačka inteligencija
Edinburgh is the first city in Britain to put driverless city buses into service. The buses have been "taught" how to change lanes, recognize traffic lights and exit a roundabout. For now, drivers will sit behind the wheel to intervene if needed.
— Aleksandar Kocic (@AlexKocic) May 15, 2023
_____
"Sisaj kurac, Boomere. Spletkario si i nameštao ban pa se sad izvlačiš. Radiša je format a ti si mali iskompleksirani miš. Katastrofa za Burundi čoveče.
A i deluje da te napustio drugar u odsudnom trenutku pa te spašavaju ova tovarka što vrv ni ne dismr na ribu, to joj se gadi, i ovaj južnjak koji o niškim kafanama čita na forumu. Prejaka šarža." - Monsier K.
- Posts : 52531
Join date : 2017-11-16
- Post n°133
Re: Veštačka inteligencija
Now all we need are buses without passengers as well, since the passengers work from home.
- Posts : 3470
Join date : 2014-10-29
- Post n°134
Re: Veštačka inteligencija
_____
you cannot simply trust a language model when it tells you how it feels
- Posts : 3470
Join date : 2014-10-29
- Post n°136
Re: Veštačka inteligencija
https://theconversation.com/ai-pioneer-geoffrey-hinton-says-ai-is-a-new-form-of-intelligence-unlike-our-own-have-we-been-getting-it-wrong-this-whole-time-204911
A very good take on our relationship with AI.
The false equivalence trap
On one hand, large language model-based tools such as ChatGPT produce text that’s very similar to what humans write. ChatGPT even makes stuff up, or “hallucinates”, which Hinton points out is something humans do as well. But we risk being reductive when we consider such similarities a basis for comparing AI intelligence with human intelligence.
We can find a useful analogy in the invention of artificial flight. For thousands of years, humans tried to fly by imitating birds: flapping their arms with some contraption mimicking feathers. This didn’t work. Eventually, we realised fixed wings create uplift, using a different principle, and this heralded the invention of flight.
Planes are no better or worse than birds; they are different. They do different things and face different risks.
AI (and computation, for that matter) is a similar story. Large language models such as GPT-3 are comparable to human intelligence in many ways, but work differently. ChatGPT crunches vast swathes of text to predict the next word in a sentence. Humans take a different approach to forming sentences. Both are impressive.
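Just to make the "predict the next word" bit concrete, a minimal sketch of my own (assuming the Hugging Face transformers library, with the small public gpt2 checkpoint standing in for GPT-3-class models); it only prints the model's top guesses for the next token:
[code]
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public model used purely for illustration; GPT-3/4-class models work
# on the same next-token-prediction principle, just at a much larger scale.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Planes are no better or worse than birds; they are"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The distribution at the last position is the model's guess for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tok.decode([int(token_id)])!r}: {prob.item():.3f}")
[/code]
Generation is just this step in a loop: append the chosen token to the prompt and predict again.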
_____
you cannot simply trust a language model when it tells you how it feels
- Posts : 52531
Join date : 2017-11-16
- Post n°137
Re: Veštačka inteligencija
Non-human intelligence (NHI) is a better name than AI
- Posts : 8095
Join date : 2020-09-07
- Post n°138
Re: Veštačka inteligencija
Nah. The biggest problem is the "intelligence" part.
These are statistical models that work off databases of existing "knowledge".
Stable Diffusion, which turns text into an image, literally imports a database that is fairly large, about a gigabyte if I remember correctly.
The end result depends directly on the imported "knowledge" and on the seed, i.e. the "dice roll" that is in charge of the "creativity".
Repeating the seed in a query gives similar results.
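To make the seed point concrete, a minimal sketch of my own (assuming the Hugging Face diffusers library and the public runwayml/stable-diffusion-v1-5 checkpoint; the prompt and file names are just illustrative): same prompt, same weights and same seed reproduce essentially the same image; change only the seed and you get a different but related one.
[code]
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; the downloaded weights are what I called the
# imported "knowledge" base above.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a city square at dusk"

# Fixed seed -> fixed "dice roll": reproducible output.
gen = torch.Generator(device="cuda").manual_seed(42)
pipe(prompt, generator=gen).images[0].save("seed42.png")

# Only the seed changes: a different, but thematically similar, result.
gen = torch.Generator(device="cuda").manual_seed(43)
pipe(prompt, generator=gen).images[0].save("seed43.png")
[/code]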
_____
Sweet and Tender Hooligan
- Posts : 3803
Join date : 2020-09-27
Location : Waystone Inn
- Post n°139
Re: Veštačka inteligencija
Naomi Klein on AI
_____
my goosebumps have goosebumps
- Posts : 8095
Join date : 2020-09-07
- Post n°140
Re: Veštačka inteligencija
a question from reddit
Hello, I am not Serbian and don't know a lot about Serbia but really love it. I have a question, why do most people in this subreddit write in the latin alphabet? I thought that you used the Cyrillic alphabet.
Sorry if this question is not appropriate but even Google couldn't answer me.
Answer from ChatGPT:
Hello,
Your question is perfectly appropriate and very welcome! The uniqueness of the Serbian language is that it can be written in two different alphabets, Cyrillic and Latin, interchangeably. Both scripts are considered official and are taught in schools.
While the Cyrillic script has a traditional significance and is often used in formal documentation, literature, and newspapers, in the realm of digital communication, the Latin script tends to be more popular. This trend can be attributed to a few reasons:
Technical Convenience: The Latin script is more universally recognized on the internet and digital devices. The Cyrillic script can occasionally run into compatibility issues with software or webpages that aren't designed to support it. Additionally, using Latin script makes the content more accessible to a broader, non-Serbian speaking audience.
Ease of Typing: Typing in Latin is usually more straightforward on most keyboards, especially on mobile devices or when using an international layout.
Remember, this doesn't reflect on a person's 'Serbian-ness' or their respect for the culture. It's more about convenience and ease in the digital world.
Hope this answers your question! Feel free to ask if you have any other queries.
excellent, a 5
_____
Sweet and Tender Hooligan
- Posts : 3013
Join date : 2020-06-19
Location : bizarr nők hazája
- Post n°141
Re: Veštačka inteligencija
_____
Hong Kong dollar, Indian cents, English pounds and Eskimo pence
- Posts : 8095
Join date : 2020-09-07
- Post n°142
Re: Veštačka inteligencija
1. This has nothing to do with AI
2. Regulate open source?
_____
Sweet and Tender Hooligan
- Posts : 11623
Join date : 2018-03-03
Age : 36
Location : Hotline Rakovica
- Post n°143
Re: Veštačka inteligencija
Hello! How can I assist you today?
music video for the song golden brown by the stranglers. Where was it made?
The music video for the song "Golden Brown" by The Stranglers was filmed in various locations in and around Paris, France. The video features shots of iconic Parisian landmarks, including the Eiffel Tower, the Louvre Museum, and the Seine River. The visual style of the video captures the charm and atmosphere of the city. It's worth noting that the song itself was released in 1981 and has since become one of The Stranglers' most popular and recognizable tracks.
_____
Everything there is in a film, I said, there is on Zlatibor too.
~~~~~
Don't let them fool you! Save your points!
- Posts : 11623
Join date : 2018-03-03
Age : 36
Location : Hotline Rakovica
- Post n°144
Re: Veštačka inteligencija
Btw, I know it was filmed in Egypt. I was interested in the interiors.
_____
Everything there is in a film, I said, there is on Zlatibor too.
~~~~~
Don't let them fool you! Save your points!
- Posts : 52531
Join date : 2017-11-16
- Post n°145
Re: Veštačka inteligencija
It described the video perfectly. Except that it's the Duran Duran video for A View to a Kill.
- Posts : 8095
Join date : 2020-09-07
- Post n°146
Re: Veštačka inteligencija
Today there are as many living Stranglers as there are living Beatles, and the song is a masterpiece and would have been a hit even if Jean-Jacques hadn't told the whole world what it's really about, and I also think the same as Hugh, i.e. that they were one of the most underrated bands of that era.
_____
Sweet and Tender Hooligan
- Posts : 11623
Join date : 2018-03-03
Age : 36
Location : Hotline Rakovica
- Post n°147
Re: Veštačka inteligencija
Ah, but see, it wasn't only in Egypt.
The video for "Golden Brown" was directed by Lindsey Clennell.[18] It depicts the band members both as explorers in an Arabian country in the 1920s and performers for a fictional "Radio Cairo".
In addition to the Pyramids, the video is intercut with stock footage of the Mir-i-Arab Madrasah in Bukhara, the Shah Mosque in Isfahan, the Great Sphinx, Feluccas sailing, Bedouins riding, and camel racing in the United Arab Emirates. The performance scenes were filmed in the Leighton House Museum in Holland Park, London, which was also used in the filming of the video for "Gold" by Spandau Ballet in 1983.[19]
_____
Everything there is in a film, I said, there is on Zlatibor too.
~~~~~
Don't let them fool you! Save your points!
- Posts : 7229
Join date : 2019-11-04
- Post n°148
Re: Veštačka inteligencija
Runaway AI Is an Extinction Risk, Experts Warn | WIRED
Runaway AI Is an Extinction Risk, Experts Warn
A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic.
LEADING FIGURES IN the development of artificial intelligence systems, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may someday pose an existential threat to humanity comparable to that of nuclear war and pandemics.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement, released today by the Center for AI Safety, a nonprofit.
The idea that AI might become difficult to control, and either accidentally or deliberately destroy humanity, has long been debated by philosophers. But in the past six months, following some surprising and unnerving leaps in the performance of AI algorithms, the issue has become a lot more widely and seriously discussed.
In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing AI with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio—two of three academics given the Turing Award for their work on deep learning, the technology that underpins modern advances in machine learning and AI—as well as dozens of entrepreneurs and researchers working on cutting-edge AI problems.
“The statement is a great initiative,” says Max Tegmark, a physics professor at the Massachusetts Institute of Technology and the director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s Institute published a letter calling for a six-month pause on the development of cutting-edge AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.
Tegmark says he hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is that the AI extinction threat gets mainstreamed, enabling everyone to discuss it without fear of mockery,” he adds.
Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in a quote issued along with his organization’s statement.
The current tone of alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network that is trained on enormous quantities of human-written text to predict the words that should follow a given string. When fed enough data, and with additional training in the form of feedback from humans on good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge—even if their answers are often riddled with mistakes.
These language models have proven increasingly coherent and capable as they have been fed more data and computer power. The most powerful model created so far, OpenAI’s GPT-4, is able to solve complex problems, including ones that appear to require some forms of abstraction and common sense reasoning.
Language models had been getting more capable in recent years, but the release of ChatGPT last November drew public attention to the power—and potential problems—of the latest AI programs. ChatGPT and other advanced chatbots can hold coherent conversations and answer all manner of questions with the appearance of real understanding. But these programs also exhibit biases, fabricate facts, and can be goaded into behaving in strange and unpleasant ways.
Geoffrey Hinton, who is widely considered one of the most important and influential figures in AI, left his job at Google in April in order to speak about his newfound concern over the prospect of increasingly capable AI running amok.
National governments are becoming increasingly focused on the potential risks posed by AI and how the technology might be regulated. Although regulators are mostly worried about issues such as AI-generated disinformation and job displacement, there has been some discussion of existential concerns.
“We understand that people are anxious about how it can change the way we live. We are, too,” Sam Altman, OpenAI’s CEO, told the US Congress earlier this month. “If this technology goes wrong, it can go quite wrong.”
Not everyone is on board with the AI doomsday scenario, though. Yann LeCun, who won the Turing Award with Hinton and Bengio for the development of deep learning, has been critical of apocalyptic claims about advances in AI and has not signed the letter as of today.
- Posts : 11623
Join date : 2018-03-03
Age : 36
Location : Hotline Rakovica
- Post n°149
Re: Veštačka inteligencija
Look how they pump up their investments. Meanwhile, railway lines are breaking apart all over the place. AI will be to this decade exactly what self-driving cars and big data were to the last one.
_____
Everything there is in a film, I said, there is on Zlatibor too.
~~~~~
Don't let them fool you! Save your points!
- Posts : 7229
Join date : 2019-11-04
- Post n°150
Re: Veštačka inteligencija
Fwiw
Wild stuff here https://t.co/7MAOwemkUS
— Armand Domalewski (@ArmandDoma) June 1, 2023