
When do you think AGI will happen and when do you think AI will become sentient? by 96suluman in ArtificialInteligence

[–]Adeldor 0 points (0 children)

Yes. I'm wondering if you meant sentience will precede ASI. As mentioned, I believe AGI will precede sentience.

When do you think AGI will happen and when do you think AI will become sentient? by 96suluman in ArtificialInteligence

[–]Adeldor 4 points (0 children)

That's a separate subject unrelated to the possibility of inorganic systems exhibiting sentience.

"I hate AIs and the people who create them ..."

When you declare your hatred like this, it seems the conversation is no longer useful. So I'll leave it there with you.

When do you think AGI will happen and when do you think AI will become sentient? by 96suluman in ArtificialInteligence

[–]Adeldor 3 points (0 children)

"Building something to have sentience is a different proposition to building something that behaves like it has sentience."

As explored by Turing, if the black box response of the system is indistinguishable, then the difference between "have" and "behave like" becomes moot. The mimicry itself would exhibit sentience.

When do you think AGI will happen and when do you think AI will become sentient? by 96suluman in ArtificialInteligence

[–]Adeldor 4 points (0 children)

"Lmao and here you are pretending that inorganic compounds can have sentience."

What is extraordinary about organic compounds that prevents the emulation of their behavior, such as neuron transfer functions or synaptic strengths?
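
For illustration, here's a toy sketch of my own (not any real brain model): a neuron transfer function with weighted synaptic strengths reduces to ordinary arithmetic, which inorganic hardware handles trivially.

```python
import math

def neuron(inputs, weights, bias):
    """Toy neuron: synaptic strengths are the weights; the transfer
    function is a weighted sum squashed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Three inputs with different synaptic strengths.
print(neuron([0.5, 1.0, 0.2], [0.8, -0.4, 1.5], bias=0.1))
```

Real neurons are messier than this, of course, but nothing in the chemistry looks uncomputable.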

When do you think AGI will happen and when do you think AI will become sentient? by 96suluman in ArtificialInteligence

[–]Adeldor 0 points (0 children)

The apparent certainty of your position regarding sentience is interesting. Two immediate points come to mind:

  • Define sentience.

  • What is special about "living things" that cannot be mirrored/emulated?

JWST Discovers This after Studying Kilonova!! by [deleted] in space

[–]Adeldor [score hidden]  (0 children)

No. I'm saying it's a click-bait title(!!) It might be the best video on the subject, but so often I find following such links results in a waste of time, so I don't bother now. If the discovery is important enough, there'll be other reports from less sensational sources.

JWST Discovers This after Studying Kilonova!! by [deleted] in space

[–]Adeldor [score hidden]  (0 children)

It discovered a "this"? Not seen one of those.

Not clicking on click-bait links, either.

Voyager 2 is currently unable to receive commands or transmit data back to Earth by amiralul in space

[–]Adeldor [score hidden]  (0 children)

I recall such a thing happened to the Viking 1 lander. In that case communication was not recovered. Crossing fingers here.

You Can Now Build Your Own AI Girlfriend by saffronfan in ChatGPT

[–]Adeldor 0 points (0 children)

I'd argue using the word "girlfriend" for a chatbot is the issue, and there being such a high demand suggests deep problems abound.

Put ChatGPT on your website in seconds with this tool. No sign-up required. by brian_the_ocd_guy in ChatGPT

[–]Adeldor 1 point (0 children)

I'm attempting to use your web site, specifically the chat box provided on it (the one with the two preset questions). Is there some additional step or configuration I missed?

Put ChatGPT on your website in seconds with this tool. No sign-up required. by brian_the_ocd_guy in ChatGPT

[–]Adeldor 1 point (0 children)

Got past that. However, now I'm getting the error message below in response to any statement, even the two preset questions. Is there a training data set that must be loaded?

"Sorry, i don't know answer to this question"

You Can Now Build Your Own AI Girlfriend by saffronfan in ChatGPT

[–]Adeldor 1 point (0 children)

"1.7 million downloads of a16z's Character.AI friendship bots in one week."

Wow. This being so popular says a great deal about "today," much of it troubling.

Put ChatGPT on your website in seconds with this tool. No sign-up required. by brian_the_ocd_guy in ChatGPT

[–]Adeldor 1 point (0 children)

The chat window provided on this web site isn't working for me. Is there some setup or configuration I must do to try the example? Here's the error message:

"Cannot read properties of undefined (reading 'url')"

GPT-4 vs. Claude 2 vs. Llama 2 - Comparing LLM Models VOL:1 by Odd_Opening5473 in ChatGPT

[–]Adeldor 0 points (0 children)

Here's Pi's response:

"Both sentences refer to the same action, but the focus is different. In the first sentence, John is the focus, while in the second sentence, the dog is the focus. 180 characters done!"

James Cameron remarks that Ai isn't even close to human level... by Relevant-Blood-8681 in singularity

[–]Adeldor 2 points (0 children)

Moore's law is indeed slowing down - for the dimensions of features on silicon. That says nothing about, e.g., layered stacking and algorithmic improvements, especially when it comes to training. I've every confidence the evolution of such AI systems is in its infancy.

Yes, pre-trained is the key statement (which I emphasized with my 2nd paragraph), but it doesn't invalidate the identification of Cameron's inaccuracy. FWIW, my name's on a backpropagation neural net patent in the US. I have some insight into how different the system loads are between adjusting weights through backpropagation (training) vs calculating the next output vectors (inference).
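
To make the training vs inference distinction concrete, here's a toy numerical sketch of my own (nothing to do with the patent or any real LLM): inference is a single forward pass, while training adds gradient computation and weight updates, repeated over enormous datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512)) * 0.01   # toy weight matrix

def forward(x, W):
    """Inference: one matrix-vector product plus a nonlinearity."""
    return np.tanh(W @ x)

def train_step(x, target, W, lr=0.01):
    """Training: forward pass, error, backpropagated gradient, weight update."""
    y = forward(x, W)
    error = y - target                        # gradient of squared error w.r.t. y
    grad = np.outer(error * (1.0 - y**2), x)  # chain rule through tanh
    return W - lr * grad

x = rng.normal(size=512)
y = forward(x, W)                    # one pass: what a chatbot does per token
W = train_step(x, np.zeros(512), W)  # training repeats this billions of times
```

Even in this toy, the training step does everything the inference step does plus the gradient and the update, and a real model repeats that across an entire corpus - hence the megawatts.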

UFO hearing live stream: Whistleblower gives evidence at US House panel hearing - BBC News by coinfanking in space

[–]Adeldor 0 points (0 children)

I grant you that's interesting. Thank you.

Still, once again it's indistinct/fuzzy (as you imply). And Occam's Razor suggests there are many more mundane explanations for this before one leaps to alien craft, such as a secret military device (it is over Israel, after all) or a hoax (what is the credibility of the source?).

James Cameron remarks that Ai isn't even close to human level... by Relevant-Blood-8681 in singularity

[–]Adeldor 4 points (0 children)

Maybe I'm naive, but I expected Cameron to be smart enough to understand exponential growth. Besides, he's wrong with his example. A pre-trained LLM can be run on a home PC with a decent GPU, giving a "sort of human-like" conversation without consuming megawatts or requiring tons of hardware.

I grant that training does require those sorts of resources. As he's not "in the game," I don't blame him too much for confusing the two, although perhaps he could be a little more humble in giving his opinion.
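
To back up the "home PC" claim, here's roughly what running a pre-trained model locally looks like with the Hugging Face transformers library. This is only a sketch: the model name is an example (some models are gated and need license acceptance), and it assumes bitsandbytes is installed so 4-bit quantization lets a 7B-class model fit in consumer VRAM.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # example only; any open 7B-class model works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # place layers on whatever GPU is available
    load_in_4bit=True,    # quantize so the weights fit in a few GB of VRAM
)

prompt = "Explain why training an LLM costs far more than running one."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

No megawatts involved - the same cannot be said for the training run that produced the weights.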

Is it possible DART destroyed the asteroid it hit? by Watchyobackistan in space

[–]Adeldor 2 points (0 children)

No. After the impact the two bodies were observed still to exist, with the change in Dimorphos' orbital period about Didymos being quite precisely measured.

UFO hearing live stream: Whistleblower gives evidence at US House panel hearing - BBC News by coinfanking in space

[–]Adeldor 4 points (0 children)

Then we are back to there being no clear evidence. Just hearsay in a congressional hearing on a subject that has a long history of false claims.

UFO hearing live stream: Whistleblower gives evidence at US House panel hearing - BBC News by coinfanking in space

[–]Adeldor 1 point (0 children)

Unidentified Anomalous Phenomena.

Not only does the title of the BBC link in this post use "UFO," it makes absolutely no difference to the argument regarding clear evidence.

"You sound like you have put zero effort into understanding what you're talking about."

You have no idea what effort I've put into this, and your slight in lieu of an effective counter - such as tangible evidence or clear, unambiguous images - is noteworthy.

I'll leave it there with you. Have a good day.

UFO hearing live stream: Whistleblower gives evidence at US House panel hearing - BBC News by coinfanking in space

[–]Adeldor 5 points (0 children)

No. It reinforces my point. There are no clear images, despite the billions of cameras that have been in the hands of the global populace for years, and no clear images released by any military. Not one.