Building a Dyson sphere using ChatGPT

by Ashutosh Jogalekar

Artist’s rendering of a Dyson sphere

In 1960, physicist Freeman Dyson published a paper in the journal Science describing how a technologically advanced civilization would make its presence known. Dyson’s assumption was that whether an advanced civilization signals its intelligence or hides it from us, it would not be able to hide the one thing that’s essential for any civilization to grow – energy. Advanced civilizations would likely try to capture all the energy of their star to grow.

To do this, borrowing an idea from Olaf Stapledon, Dyson imagined the civilization taking apart a number of the planets and other material in their solar system to build a shell of material that would fully enclose their star, thus capturing far more of its energy than a planet alone could. This energy-capturing sphere would radiate its enormous waste heat out in the infrared spectrum. So one way to detect alien civilizations would be to look for signatures of this infrared radiation in space. Since then these giant structures – later sometimes imagined as distributed panels rather than single continuous shells – built by advanced civilizations to capture their star’s energy have become known as Dyson spheres. They have been featured in science fiction books and TV shows, including Star Trek.

I asked the AI engine ChatGPT to build me a hypothetical 2-meter-thick Dyson sphere at a distance of 2 AU (~300 million kilometers). I wanted to see how efficiently ChatGPT harnesses information from the internet to give me specifics and how well its underlying large language model (LLM) understood what I was saying. Read more »
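Just to give a sense of the scales involved (a back-of-the-envelope sketch of my own, not figures from the article), the raw numbers for a 2-meter-thick shell at 2 AU are staggering:

```python
import math

AU = 1.496e11      # meters per astronomical unit
L_SUN = 3.828e26   # solar luminosity in watts

r = 2 * AU         # shell radius: 2 AU, ~300 million km
t = 2.0            # shell thickness in meters

area = 4 * math.pi * r ** 2   # surface area of the sphere
volume = area * t             # material volume of a thin shell
power = L_SUN                 # a complete shell intercepts the star's whole output

print(f"surface area ≈ {area:.2e} m^2")       # ~1.12e24 m^2
print(f"material volume ≈ {volume:.2e} m^3")  # ~2.25e24 m^3
print(f"captured power ≈ {power:.2e} W")
```

That material volume is on the order of the combined volume of the solar system’s planets, which is exactly why Dyson imagined dismantling them.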

Monday Poem

Graduations

I have before me a list of extensions
without names. But it seems not to go on forever,
because of horizon which, with
the slickness of a blade, a knife of
limitation, slashes time in two

I’ve ticked-off the list through many graduations,
sometimes with honors, sometimes
smeared against a wall of dreams since I’m
a creature of their reckless persistence,
their humiliations

But that irresistible incision far off, though closer now,
still draws me regardless of my self-shaped limitations,
my turnings from what is true

Jim Culleny, 4/16/23

On the Importance of Community

by Jonathan Kujawa

In the movies the mathematician is always a lone genius, possibly mad, and uninterested in socializing with other people. Or they are Jeff Goldblum — a category of its own. While it is true that doing mathematics involves a certain amount of thinking alone, I’ve frequently argued here at 3QD that math is a fundamentally social endeavor.

This weekend I was reminded again of this fundamental fact. Now that we are emerging from covid, in-person math conferences have returned in abundance. I am currently in Cincinnati attending one of the American Mathematical Society’s regional math conferences. It has been wonderful to hear many great talks about cutting-edge research. The real treat, though, is the chance to see old friends and meet new people, especially the grad students and young researchers who’ve come into the field in the last few years.

In chats between talks, you learn about the problems people are pondering; their private shortcuts to how to think about certain topics; how their universities handle teaching, budgets, and post-covid life; and, of course, the latest professional gossip. It is nice to be reminded that you are part of a community of like-minded folks.

Dr. Whitaker

One of the tentpole events of the conference was the Einstein Public Lecture. They are held annually at one of the AMS meetings and were started to celebrate the one-hundredth anniversary of Albert Einstein’s annus mirabilis. In this case, the speaker was Nathaniel Whitaker, a University of Massachusetts, Amherst, professor. While the mathematical topic of his presentation was his work on finding approximate solutions and numerical estimates to the sorts of partial differential equations which arise in the study of fluid flows, the real story he wanted to tell was his journey from the segregated schools of 1950s Virginia to his role in the 2020s as a leader of a large research university [1]. Read more »

Activism or Aestheticism: Art in the Anthropocene

by Ethan Seavey

Art Paris Art Fair 2022. Photo by author.

In the growing sector of the contemporary art world which focuses on environmental issues, participants in the art (artists, critics, and the general audience) disagree on the intention of each work of art: does it merit only aesthetic praise, or is it a successful work of climate activism? In my brief internship at Art of Change 21, a French nonprofit association at the intersection of contemporary art and the environment, I frequently encountered this dichotomy. At Art Paris 2022, the association hosted an exhibition centered around artists who deal with environmental themes. My goal is to contextualize some of the artworks present at this exhibition, based on critical theory as well as my own experiences.

When an artist depicts an environmental issue, they want to bring attention to it, and for many in the art world, this attention is enough to be considered powerful activism. In a study for the Norwegian University of Science and Technology, Laura Kim Sommer and Christian Andreas Klöckner collected qualitative data based on surveys and cognitive recognition information and determined that art engaging with environmental issues had strong emotional effects upon its audience. This study was conducted at the 2015 UN Climate Conference, for an exhibition of art pertaining to climate issues. (Art of Change 21 was born at this Climate Conference, and had no role in the display of these artworks; the association was present elsewhere in the conference, though, with similar projects.) Read more »

The Myth of Free Thinking

by Chris Horner

The illusion of rational autonomy

The world is full of people who think that they think for themselves. Free thinkers and sceptics, they imagine themselves as emancipated from imprisoning beliefs. Yet most of what they, and you, know comes not from direct experience or from figuring it out for themselves, but from unknown others. Take science, for instance. What do you think you actually know? That the moon affects the tides? Something about the space-time continuum or the exciting stuff about quantum mechanics? Or maybe the research on viruses and vaccines? Chances are whatever you know you have taken on trust – even, or particularly, if you are a reader of popular science books. This also applies to most scientists, since they usually only know what is going on in their own field of research. The range of things we call ‘science’ is simply too vast for anyone to have knowledge in any other way.

We are confronted by a series of fields of research, experimentation and application: complex and specialised fields that require years of study and training to fully understand. As individuals, we cannot be experts in all scientific domains, which is why we typically rely on the scientific community, composed of experts from various fields who have the background knowledge and experience to evaluate scientific theories and data accurately. Read more »

Artificial General What?

by Tim Sommers

One thing that Elon Musk and Bill Gates have in common, besides being two of the five richest people in the world, is that they both believe that there is a very serious risk that an AI more intelligent than them – and, so, more intelligent than you and I, obviously – will one day take over, or destroy, the world. This makes sense because in our society how smart you are is well-known to be the best predictor of how successful and powerful you will become. But, you may have noticed, it’s not only the richest people in the world that worry about an AI apocalypse. One of the “Godfathers of AI,” Geoff Hinton, recently said “It’s not inconceivable” that AI will wipe out humanity. In a response linked to by 3 Quarks Daily, Gary Marcus, a neuroscientist and founder of a machine learning company, asked whether the advantages of AI were sufficient for us to accept a 1% chance of extinction. This question struck me as eerily familiar.

Do you remember who offered this advice? “Even if there’s a 1% chance of the unimaginable coming due, act as if it is a certainty.”

That would be Dick Cheney as quoted by Ron Suskind in “The One Percent Doctrine.” Many regard this as the line of thinking that led to the Iraq invasion. If anything, that’s an insufficiently cynical interpretation of the motives behind an invasion that killed between three hundred thousand and a million people and found no weapons of mass destruction. But there is a lesson there. Besides the fact that “inconceivable” need not mean 1% – but might mean a one following a googolplex of zeroes [.0….01%] – trying to react to every one-percent-probable threat may not be a good idea. Therapists have a word for this. It’s called “catastrophizing.” I know, I know, even if you are catastrophizing, we still might be on the brink of catastrophe. “The decline and fall of everything is our daily dread,” Saul Bellow said. So, let’s look at the basic story that AI doomsayers tell. Read more »

Hallucinating AI: The devil is in the (computational) details

by Robyn Repko Waller

Image by NoName_13 from Pixabay

AI has a proclivity for exaggeration. This hallucination is integral to its success and its danger.

Much digital ink has been spilled, and many computational resources consumed, of late over the too-rapidly advancing capacities of AI.

Large language models like GPT-4 heralded as a welcome shortcut for email, writing, and coding. Worried discussion of the implications for pedagogical assessment — how to codify and detect AI plagiarism. OpenAI image generation to rival celebrated artists and photographers. And what of the convincing deep fakes?

The convenience of using AI to innovate and make efficient our social world and health, from TikTok to medical diagnosis and treatment. Continued calls, though, for algorithmic fairness in the use of algorithmic decision-making in finance, government, health, security, and hiring.

Newfound friends, therapists, lovers, and enemies of an artificial nature. Both triumphant and terrified exclamations and warnings of sentient, genuinely intelligent AI. Serious widespread calls for a pause in development of these AI systems. And, in reply, reports that such exclamations and calls are overblown: Doesn’t intelligence require experience? Embodiment?

These are fascinating and important matters. Still, I don’t intend to add to the much-warranted shouting. Instead, I want to draw attention to a curious, yet serious, corollary of the use of such AI systems, the emergence of artificial or machine hallucinations. By such hallucinations, folks mean the phenomenon by which AI systems, especially those driven by machine learning, generate factual inaccuracies or create new misleading or irrelevant content. I will focus on one kind of hallucination, the inherent propensity of AI to exaggerate and skew. Read more »

The Great Pretender: AI and the Dark Side of Anthropomorphism

by Brooks Riley

‘Wenn möglich, bitte wenden.’

That was the voice of the other woman in the car, ‘If possible, please turn around.’ She was nowhere to be seen in the BMW I was riding in sometime in the early aughts, but her voice was pleasant—neutral, polite, finely modulated and real.  She was the voice of the navigation system, a precursor of the chatbot—without the chat. You couldn’t talk back to her. All she knew about you was the destination you had typed into the system.

‘Wenn möglich, bitte wenden.’

She always said this when we missed a turn, or an exit. Since we hadn’t followed her suggestion the first time, she asked us again to turn around. There were reasons not to take her advice. If we were on the autobahn, turning around might be deadly. More often, we just wanted her to find a new route to our destination.

The silence after her second directive seemed excessive—long enough for us to get the impression that she, the ‘voice’, was sulking. In reality, the silence covered the period of time the navigation system needed to calculate a new route. But to ears that were attuned to silent treatments speaking volumes, it was as if Frau GPS was mightily miffed that we hadn’t turned around.

Recent encounters with the Bing chatbot have jogged my memory of that time of relative innocence, when a bot conveyed a message, nothing more. And yet, even that simple computational interaction generated a reflex anthropomorphic response, provoked by the use of language, or in the case of the pregnant silence, the prolonged absence of it. Read more »

Not Another Lamb

by Eric Bies

Everyone is talking about artificial intelligence. This is understandable: AI in its current capacity, which we so little understand ourselves, alternately threatens dystopia and promises utopia. We are mostly asking questions. Crucially, we are not asking so much whether the risks outweigh the rewards. That is because the relationship between the first potential and the second is laughably skewed. Most of us are already striving to thrive; whether increasingly superintelligent AI can help us do that is questionable. Whether AI can kill all humans is not.

So laymen like me potter about and conceive, no longer of Kant’s end-in-itself, but of some increasingly possible end-to-it-all. No surprise, then, when notions of the parallel and the alternate grow more and more conspicuous in our cultural moment. We long for other science-fictional universes.

Happily, then, did the news of the FDA’s approval of one company’s proprietary blend of “cultured chicken cell material” greet my wonky eyes last month.

“Too much time and money has been poured into the research and development of plant-based meat,” I thought. “It’s time we focused our attention on meat-based meat.”

When I shared this milestone with my students—most of them high school freshmen—opinions were split. Like AI, lab-grown meat was quick to take on the Janus-faced contour of promise and threat. The regulatory thumbs-up to GOOD Meat’s synthetic chicken breast was, for some students, evidence of our steady march, aided by science, into a sustainable future. For other students, it was yet another chromium-plated creepy-crawly, an omen of more bad to come. Read more »

Sunday, April 16, 2023

The late poet Charles Simic was a chess prodigy

Adrienne Raphel at JSTOR Daily:

Charles Simic, the late, great Serbian-American poet, was born in 1938 in Belgrade, Yugoslavia. In 1941, when Simic was three, Hitler invaded, and a bomb explosion hurled Simic out of bed. Three years later, in 1944, another series of detonations exploded across town—this time, dropped by Allies. Images of a war-torn country inform Simic’s earliest memories. Belgrade had become a chess board, a deadly battleground for both the Nazis and Allies. Simic emigrated as a boy to the United States, and he wrote poetry in English, but the hellish landscape of his youth lay at the heart of his work.

Simic was also a chess prodigy, and the game rewired his brain. As he wrote in the New York Review of Books, “The kinds of poems I write—mostly short and requiring endless tinkering—often recall for me games of chess. They depend for their success on word and image being placed in proper order and their endings must have the inevitability and surprise of an elegantly executed checkmate.”

More here.

By imbuing enormous vectors with semantic meaning, we can get machines to reason more abstractly — and efficiently — than before

Anil Ananthaswamy in Quanta:

The key is that each piece of information, such as the notion of a car, or its make, model or color, or all of it together, is represented as a single entity: a hyperdimensional vector.

A vector is simply an ordered array of numbers. A 3D vector, for example, comprises three numbers: the x, y and z coordinates of a point in 3D space. A hyperdimensional vector, or hypervector, could be an array of 10,000 numbers, say, representing a point in 10,000-dimensional space. These mathematical objects and the algebra to manipulate them are flexible and powerful enough to take modern computing beyond some of its current limitations and foster a new approach to artificial intelligence.
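A minimal sketch of how hypervectors can encode structured information like a car’s make and color (my own illustration in Python with NumPy; the multiply-to-bind, add-to-bundle scheme shown here is one common convention in hyperdimensional computing, not necessarily the exact system described in the article, and the names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def hv():
    """A random bipolar (+1/-1) hypervector; two random ones are nearly orthogonal."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    """Cosine similarity between two hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Role vectors and value vectors
COLOR, MAKE = hv(), hv()
RED, VOLVO = hv(), hv()

# Bind each role to its value (elementwise multiply), then bundle the pairs (add)
car = COLOR * RED + MAKE * VOLVO

# Query the composite: unbinding with COLOR leaves RED plus near-orthogonal noise
probe = car * COLOR
print(cosine(probe, RED))    # high similarity, roughly 0.7
print(cosine(probe, VOLVO))  # near zero
```

Because random high-dimensional ±1 vectors are nearly orthogonal, the leftover noise after unbinding barely moves the similarity score, so the stored value can be recovered reliably from a single composite vector.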

“This is the thing that I’ve been most excited about, practically in my entire career,” Olshausen said. To him and many others, hyperdimensional computing promises a new world in which computing is efficient and robust, and machine-made decisions are entirely transparent.

More here.

Toward a Leisure Ethic

Stuart Whatley in The Hedgehog Review:

To most people today, the notion of a leisure ethic will sound foreign, paradoxical, and indeed subversive, even though leisure is still commonly associated with the good life. More than any other society in the past, ours certainly has the technology and the wealth to furnish more people with greater freedom over more of their time. Yet because we lack a shared leisure ethic, we have not availed ourselves of that option. Nor does it occur to us even to demand or strive for such a dispensation.

One reason for this is that the values and culture that created our current abundance may be incompatible with actually enjoying it. Sparta had the same problem.

More here.

Researchers Let 25 AI Bots Loose Inside A Virtual Town and The Results Were Fascinating

Victor Tangermann at Futurism:

The researchers found that their agents could “produce believable individual and emergent social behaviors.” For instance, one agent attempted to throw a Valentine’s Day party by sending out invites and setting a time and place for the party.

A Smallville mayoral race also included the kind of drama you’d expect to occur in a small town.

“To be honest, I don’t like Sam Moore,” an agent called Tom said after being asked what he thought of the mayoral candidate. “I think he’s out of touch with the community and doesn’t have our best interests at heart.”

It got even more human than that.

More here.  Research paper here.