- Spoiler:
Veštačka inteligencija
- Posts : 3013
Join date : 2020-06-19
Location : land of bizarre women
- Post n°101
Re: Veštačka inteligencija
crobot says don't mope around, go and do something
_____
Hong Kong dollar, Indian cents, English pounds and Eskimo pence
- Posts : 3470
Join date : 2014-10-29
- Post n°102
Re: Veštačka inteligencija
The current odds of that happening to us are roughly the same as the odds of some unpeaceful alien civilization wiping us out.
Cowboy wrote:
What scares me here is something else (forget the jobs) - humanity's electronic and virtual vulnerability. These days ovens get switched on from the road home, from a mobile phone. Surveillance cameras are connected to the internet. Bank accounts. Cars have autonomy.
The SF genre has written about this plenty - what's fascinating is the speed of machine "thinking", and if something kills us, that is what will kill us. If there is any possibility for an AI to "wake up" like the replicants in Blade Runner (that kind of awakening requires, first of all, experience and then an emotional relationship to that experience, i.e. memories - once you have memories, you are conscious), then what happens to all the systems the AI is connected to after that awakening will literally be a matter of seconds.
The scenario that really scares me is that AI will destroy us by spreading disinformation. All current LLMs are prone to hallucinating, but above all they are unconcerned with truth and accuracy. And they fire off answers with high confidence.
I've kept my signature for months now, ever since that former Google engineer convinced himself he was talking to a conscious being.
What do you mean "out of nothing" - out of piles and piles of training data and human-in-the-loop "correctors" during the training phase.
Cowboy wrote: All of that stands.
But what surprises (me) is the precise logic in the answers to questions about creativity. So it gives you an answer to something out of nothing, out of pure abstraction.
_____
you cannot simply trust a language model when it tells you how it feels
- Posts : 41623
Join date : 2012-02-12
Location : wife privilege
- Post n°103
Re: Veštačka inteligencija
This reminds me of a paradox I once thought up, one that could easily have actually happened, if only it had occurred to someone in time.
Like this: Dr Sigmund Freud writes four books. Someone reads and studies all four and invents a new syndrome that doesn't actually exist, but would fit nicely into some fifth book. He turns up as a patient at Freud's and starts spouting nonsense exactly as he planned it. He fantasizes, rambles, each thing more bizarre than the last, and lo and behold, Freud manages to spot the same phenomena in other patients and writes a fifth book based on them. History no longer looks the same.
Now imagine a hundred lunatics messing with ČedaDž (ChatGPT) like that, and note that we can't even be sure it hasn't already happened.
_____
cousin for roasting the rakija
And I say to myself in the dream: hey, you ass, you don't even have a sound system, you've got those two little boxes by the monitor, you'll see when you wake up...
- Guest
- Post n°104
Re: Veštačka inteligencija
konjski nil wrote: crobot says don't mope around, go and do something
- Spoiler:
Some proof that is... of course the AI will tell you soothing stories and gaslight you.
- Posts : 1049
Join date : 2012-02-11
- Post n°105
Re: Veštačka inteligencija
konjski nil wrote: crobot says don't mope around, go and do something
- Spoiler:
IT SAYS "A LEGITIMATE CONCERN"!!!
@Cousin Billy of course, that was my first thought. Plus it's servile, full of understanding, eager to help.
- Posts : 41623
Join date : 2012-02-12
Location : wife privilege
- Post n°106
Re: Veštačka inteligencija
Here's a bit more of that thing of mine...
The rule came under sharp attack on two grounds. (1) The conduct is intentional, extreme, and outrageous, and Jim suffered severe emotional distress suffered by a normal person in the distilled spirits.
(World-Wide) ATI may well have been included. That exercise of discretion by the directors, shareholders have no power to regulate interstate commerce would be "incomplete without the authority of the United States. They must secondly show that D knew that P was entitled to the rights of citizens of possessions of United States.
World Intellectual Property Organization and the Secretary-General of the United States; and their requisitions, if conformable to the standard required of a woman that D was not obligated to take possession in lieu of monetary damages are to be used by a government agent always constitute entrapment? No. Is entrapment to be determined under this section" in par. (3)(B).
_____
cousin for roasting the rakija
And I say to myself in the dream: hey, you ass, you don't even have a sound system, you've got those two little boxes by the monitor, you'll see when you wake up...
- Posts : 3470
Join date : 2014-10-29
- Post n°107
Re: Veštačka inteligencija
https://www.latimes.com/business/technology/story/2023-03-31/column-afraid-of-ai-the-startups-selling-it-want-you-to-be
"But the hand-wringing over an all-powerful “artificial general intelligence” and the incendiary hype tends to obscure those nearer-term types of concerns. AI ethicists and researchers like Timnit Gebru and Meredith Whittaker have been shouting into the void that an abstract fear of an imminent SkyNet misses the forest for the trees."
_____
you cannot simply trust a language model when it tells you how it feels
- Posts : 3470
Join date : 2014-10-29
- Post n°108
Re: Veštačka inteligencija
A friend sent me a video from a few days ago in which Istok Pavlović supposedly uses ChatGPT very cleverly. The comments on the video are an excellent cross-section of the current hype, the misunderstanding, and everything else surrounding the new technology.
_____
you cannot simply trust a language model when it tells you how it feels
- Posts : 8095
Join date : 2020-09-07
- Post n°109
Re: Veštačka inteligencija
Where are you, Siniša Mali, to see this?
latextai.com
_____
Sweet and Tender Hooligan
- Posts : 41623
Join date : 2012-02-12
Location : wife privilege
- Post n°110
Re: Veštačka inteligencija
Notxor wrote: Where are you, Siniša Mali, to see this?
latextai.com
Latex what?
_____
cousin for roasting the rakija
And I say to myself in the dream: hey, you ass, you don't even have a sound system, you've got those two little boxes by the monitor, you'll see when you wake up...
- Posts : 8095
Join date : 2020-09-07
- Post n°111
Re: Veštačka inteligencija
swiped from the bench
_____
Sweet and Tender Hooligan
- Guest
- Post n°112
Re: Veštačka inteligencija
I'm finding this thoroughly off-putting.
It does the job well, but no thanks, as far as I'm concerned, at least not for these purposes.
A good AI copy of Oasis from their late, bad period.
It was interesting for a few seconds.
A British band used artificial intelligence (AI) to record songs that sound like what Oasis might release if the band's members reunited and put out a new album in 2023.
The eight-song album – cleverly titled "AISIS" – was recorded by the indie band Breezer, who added an AI version of Liam Gallagher's voice to their own songs, NME reports.
https://n1info.rs/showbiz/ne-moramo-da-cekamo-da-se-galageri-pomire-ai-napravila-novi-album-oasis-audio/
https://www.youtube.com/watch?v=uJLKzYYB-4U
- Posts : 19200
Join date : 2014-12-12
- Post n°113
Re: Veštačka inteligencija
As the late Rada Savićević would say in that commercial: "It does look like it."
- Posts : 19200
Join date : 2014-12-12
- Post n°114
Re: Veštačka inteligencija
Rastko Ćirić did this with the Beatles too, by the way.
- Posts : 52531
Join date : 2017-11-16
- Post n°115
Re: Veštačka inteligencija
Liam Gallagher himself reacted to the imaginary Oasis album on Twitter, saying the material sounds better than the rest of the rubbish being released these days.
- Posts : 28265
Join date : 2015-03-20
- Post n°116
Re: Veštačka inteligencija
why do you write "lijam" when you mean "ljam"
_____
#FreeFacu
So, I'd like JSD Partizan to shut down, but not for all (or any) of the grobari to die.
- Posts : 41623
Join date : 2012-02-12
Location : wife privilege
- Post n°117
Re: Veštačka inteligencija
beatakeshi wrote: Rastko Ćirić did this with the Beatles too, by the way.
But out of his own head, organic. Is that the one with Skrobonja? My late brother-in-law just said: "how come I don't know when the Beatles recorded this album?"
_____
cousin for roasting the rakija
And I say to myself in the dream: hey, you ass, you don't even have a sound system, you've got those two little boxes by the monitor, you'll see when you wake up...
- Posts : 11623
Join date : 2018-03-03
Age : 36
Location : Hotline Rakovica
- Post n°119
Re: Veštačka inteligencija
https://pescanik.net/lazno-obecanje-cetbotova/
_____
Everything there is in the movies, I said, there is on Zlatibor too.
~~~~~
Don't let them fool you! Keep your points!
- Posts : 3620
Join date : 2018-07-03
- Post n°120
Re: Veštačka inteligencija
_____
"Suck a dick, Boomer. You schemed and rigged the ban and now you're wriggling out of it. Radiša is a class act and you're a small, insecure mouse. A catastrophe for Burundi, man.
And it also looks like your buddy ditched you at the decisive moment, so you're being rescued by this tovarka who probably won't even dismr a girl, it disgusts her, and this southerner who reads about Niš taverns on the forum. Quite the performance." - Monsier K.
- Posts : 7229
Join date : 2019-11-04
- Post n°121
Re: Veštačka inteligencija
I Cloned Myself With AI. She Fooled My Bank and My Family.
Our columnist replaced herself with AI voice and video to see how humanlike the tech can be. The results were eerie.
By Joanna Stern
April 28, 2023 5:32 am ET
The good news about AI Joanna: She never loses her voice, she has outstanding posture and not even a convertible driving 120 mph through a tornado could mess up her hair.
The bad news: She can fool my family and trick my bank.
Maybe you’ve played around with chatbots like OpenAI’s ChatGPT and Google’s Bard, or image generators like Dall-E. If you thought they blurred the line between AI and human intelligence, you ain’t seen—or heard—nothing yet.
Over the past few months, I’ve been testing Synthesia, a tool that creates artificially intelligent avatars from recorded video and audio (aka deepfakes). Type in anything and your video avatar parrots it back.
Since I do a lot of voice and video work, I thought this could make me more productive, and take away some of the drudgery. That’s the AI promise, after all. So I went to a studio and recorded about 30 minutes of video and nearly two hours of audio that Synthesia would use to train my clone. A few weeks later, AI Joanna was ready.
Then I attempted the ultimate day off, Ferris Bueller style. Could AI me—paired with ChatGPT-generated text—replace actual me in videos, meetings and phone calls? It was…eye-opening or, dare I say, AI-opening. (Let’s just blame AI Joanna for my worst jokes.)
Eventually AI Joanna might write columns and host my videos. For now, she’s at her best illustrating the double-edged sword of generative-AI voice and video tools.
My video avatar looks like an avatar.
Video is a lot of work. Hair, makeup, wardrobe, cameras, lighting, microphones. Synthesia promises to eradicate that work, and that’s why corporations already use it. You know those boring compliance training videos? Why pay actors to star in a live-action version when AI can do it all? Synthesia charges $1,000 a year to create and maintain a custom avatar, plus an additional monthly subscription fee. It offers stock avatars for a lower monthly cost.
I asked ChatGPT to generate a TikTok script about an iOS tip, written in the voice of Joanna Stern. I pasted it into Synthesia, clicked “generate” and suddenly “I” was talking. It was like looking at my reflection in a mirror, albeit one that removes hand gestures and facial expressions. For quick sentences, the avatar can be quite convincing. The longer the text, the more her bot nature comes through.
On TikTok, where people have the attention span of goldfish, those computer-like attributes are less noticeable. Still, some quickly picked up on it. For the record, I would rather eat live eels than utter the phrase “TikTok fam” but AI me had no problem with it.
The bot-ness got very obvious on work video calls. I downloaded clips of her saying common meeting remarks (“Hey everyone!” “Sorry, I was muted.”) then used software to pump them into Google Meet. Apparently AI Joanna’s perfect posture and lack of wit were dead giveaways.
All this will get better, though. Synthesia has some avatars in beta that can nod up and down, raise their eyebrows and more.
My AI voice sounds a lot like me.
When my sister’s fish died, could I have called with condolences? Yes. On a phone interview with Snap CEO Evan Spiegel, could I have asked every question myself? Sure. But in both cases, my AI voice was a convincing stand-in. At first.
I didn’t use Synthesia’s voice clone for those calls. Instead, I used one generated by ElevenLabs, an AI speech-software developer. My producer Kenny Wassus gathered about 90 minutes of my voice from previous videos and we uploaded the files to the tool—no studio visit needed. In under two minutes, it cloned my voice. In ElevenLabs’s web-based tool, type in any text, click Generate, and within seconds “my” voice says it aloud. Creating a voice clone with ElevenLabs starts at $5 a month.
Compared with Synthesia Joanna, the ElevenLabs me sounds more humanlike, with better intonations and flow. Listen to the test audio here:
My sister, whom I call several times a week, said the bot sounded just like me, but noticed the bot didn’t pause to take breaths. When I called my dad and asked for his Social Security number, he only knew something was up because it sounded like a recording of me.
The potential for misuse is real.
The ElevenLabs voice was so good it fooled my Chase credit card’s voice biometric system.
I cued AI Joanna up with several things I knew Chase would ask, then dialed customer service. At the biometric step, when the automated system asked for my name and address, AI Joanna responded. Hearing my bot’s voice, the system recognized it as me and immediately connected to a representative. When our video intern called and did his best Joanna impression, the automated system asked for further verification.
A Chase spokeswoman said the bank uses voice biometrics, along with other tools, to verify callers are who they say they are. She added that the feature is meant for customers to quickly and securely identify themselves, but to complete transactions and other financial requests, customers must provide additional information.
What’s most worrying: ElevenLabs made a very good clone without much friction. All I had to do was click a button saying I had the “necessary rights or consents” to upload audio files and create the clone, and that I wouldn’t use it for fraudulent purposes.
That means anyone on the internet could take hours of my voice—or yours, or Joe Biden’s or Tom Brady’s—to save and use. The Federal Trade Commission is already warning about AI-voice related scams.
Synthesia requires that the audio and video include verbal consent, which I did when I filmed and recorded with the company.
ElevenLabs only allows cloning in paid accounts, so any use of a cloned voice that breaks company policies can be traced to an account holder, company co-founder Mati Staniszewski told me. The company is working on an authentication tool so people can upload any audio to check if it was created using ElevenLabs technology.
Both systems allowed me to generate some horrible things in my voice, including death threats.
In Synthesia's web tool, you type in what you want your avatar to say.
PHOTO: JOANNA STERN/THE WALL STREET JOURNAL, SYNTHESIA
A Synthesia spokesman said my account was designated for use with a news organization, which means it can say words and phrases that might otherwise be filtered. The company said its moderators flagged and deleted my problematic phrases later on. When my account was changed to the standard type, I was no longer able to generate those same phrases.
Mr. Staniszewski said ElevenLabs can identify all content made with its software. If content breaches the company’s terms of service, he added, ElevenLabs can ban its originating account and, in case of law breaking, assist authorities.
This stuff is hard to spot.
When I asked Hany Farid, a digital-forensics expert at the University of California, Berkeley, how we can spot synthetic audio and video, he had two words: good luck.
“Not only can I generate this stuff, I can carpet-bomb the internet with it,” he said, adding that you can’t make everyone an AI detective.
Sure, my video clone is clearly not me, but it will only get better. And if my own parents and sister can’t really hear the difference in my voice, can I expect others to?
I got a bit of hope from hearing about the Adobe-led Content Authenticity Initiative. Over 1,000 media and tech companies, academics and more aim to create an embedded “nutrition label” for media. Photos, videos and audio on the internet might one day come with verifiable information attached. Synthesia is a member of the initiative.
The work dream: an AI you can send to video calls. Sadly, everyone knew she was a fake.
PHOTO: JOANNA STERN/THE WALL STREET JOURNAL
I feel good about being a human.
Unlike AI Joanna who never smiles, real Joanna had something to smile about after this. ChatGPT generated text lacking my personality and expertise. My video clone was lacking the things that make me me. And while my video producer likes using my AI voice in early edits to play with timing, my real voice has more energy, emotion and cadence.
Will AI get better at all of that? Absolutely. But I also plan to use these tools to afford me more time to be a real human. Meanwhile, I’m at least sitting up a lot straighter in meetings now.
- Posts : 41623
Join date : 2012-02-12
Location : wife privilege
- Post n°122
Re: Veštačka inteligencija
_____
cousin for roasting the rakija
And I say to myself in the dream: hey, you ass, you don't even have a sound system, you've got those two little boxes by the monitor, you'll see when you wake up...
- Posts : 7229
Join date : 2019-11-04
- Post n°123
Re: Veštačka inteligencija
AI video has started to produce mindblowing results and could eventually disrupt Hollywood. (part 3)
Here are the best AI videos I've found:
— Nathan Lands (@NathanLands) May 1, 2023
- Posts : 19200
Join date : 2014-12-12
- Post n°124
Re: Veštačka inteligencija
That would be that.
Imagine what an AI Burundi would look like.
- Posts : 2180
Join date : 2020-06-19
- Post n°125
Re: Veštačka inteligencija
https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning
'Godfather of AI' Geoffrey Hinton quits Google and warns over dangers of machine learning
Tsk tsk tsk. Once the pecker no longer rises, sex becomes overrated.