
Is it possible that Earth could be a captured rogue planet? Why or why not? by Brendinooo in askscience

[–]lmxbftw (Black holes | Binary evolution | Accretion) 157 points

No - there are a few things we could expect to see from captured rogue planets which are not true of Earth:

  • Highly eccentric orbits. Earth's orbit is very close to circular; if it had come in from outside, something would have had to circularize it, and there's nothing obvious that could or would have.
  • A random inclination relative to the rest of the solar system and Earth's spin. Earth's orbital plane closely matches the rest of the solar system, instead of sitting at something like a 60° angle, which is far likelier for a captured object.
  • Being far from the Sun. A rogue planet has to lose some of its energy to stay captured, otherwise it's like a golf ball hitting the cup too fast and popping back out. The closer to the Sun you are, the more energy you need to get rid of to stay there. The Earth is much, much closer to the Sun than you would expect from a rogue planet.
  • Different chemistry. You'd expect a significant difference between the composition of Earth and other planets or asteroids if they didn't form from the same nebula.

I'm sure there are others as well I haven't mentioned.
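To put rough numbers on the energy argument in the third bullet, here's a minimal sketch (my own illustration with standard constants, not part of the original answer). A rogue planet arrives with at least zero total orbital energy, so to end up bound in a circular orbit of semi-major axis a it must shed at least GM/(2a) per kilogram:

    G = 6.674e-11      # m^3 kg^-1 s^-2
    M_sun = 1.989e30   # kg
    AU = 1.496e11      # m

    for name, a in [("1 AU (Earth)", 1 * AU), ("30 AU (Neptune)", 30 * AU)]:
        # a capture at semi-major axis a must shed at least G*M_sun/(2a) per kg
        print(f"{name}: shed >= {G * M_sun / (2 * a):.2e} J/kg")
    # staying at 1 AU requires shedding 30x more energy than staying at 30 AU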

I know this question might be kind of dumb but, How do they keep the space station cool? by PluralOfPenis-Penai in askscience

[–]Doresoom1 102 points

Former ISS flight controller here.

Not a dumb question at all! There are two internal water cooling loops on the USOS segment, the Medium Temperature Loop (MTL) and the Low Temperature Loop (LTL). These flow through every rack to cool payloads and equipment. They then reject heat to the two external cooling loops that are filled with ammonia.

The ammonia loops reject heat to the radiators, which are the big white panels that are mounted perpendicular to the solar arrays. As you said, no conduction is possible for heat rejection, so it's all rejected via radiation.

Since the ISS rotates the solar arrays for maximum insolation, the radiators always end up with their edge exposed to the sun rather than their faces, which minimizes heat they absorb themselves.
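To get a feel for the radiative rejection, here's a back-of-envelope Stefan-Boltzmann estimate (illustrative values I've assumed, not actual ISS specs):

    sigma = 5.67e-8    # W m^-2 K^-4, Stefan-Boltzmann constant
    eps = 0.9          # emissivity of the radiator coating (assumed)
    area = 2 * 50.0    # m^2, one hypothetical panel radiating from both faces
    T_rad = 275.0      # K, roughly an ammonia loop temperature (assumed)
    T_env = 3.0        # K, deep-space background

    P = eps * sigma * area * (T_rad**4 - T_env**4)
    print(f"~{P / 1000:.0f} kW rejected")   # ~29 kW for these made-up numbers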

Edit: Here's a NASA/Boeing overview on how it works (PDF): https://www.nasa.gov/pdf/473486main_iss_atcs_overview.pdf

How Does Rain Shadow Work in Cold Climates? by galexical in askscience

[–]CrustalTrudger (Tectonics | Structural Geology | Geomorphology) 56 points

There are a few requirements for generating a strong rain shadow. Galewsky, 2009 provides a nice overview. In summary, if you have a dominant wind direction of sufficient average velocity oriented perpendicular to the average orientation of a mountain range with sufficient relief (difference in height between the crest and the lowlands), then a rain shadow can develop. This can be expressed with a simple non-dimensional number, Nh/U, where h is the relief, U is the average wind velocity, and N is the Brunt-Väisälä frequency. If Nh/U is greater than 1, you'd expect a rain shadow to develop, whereas if it's much less than 1, you won't get a rain shadow, which could reflect that the wind speed is too great, the relief is insufficient, or the Brunt-Väisälä frequency is too low (or some combination thereof).

In terms of your hypothetical, this suggests you want the relief of your mountain range to be sufficient to generate an "orographic barrier", where empirical results suggest that relief of ~1.5 km is sufficient for this purpose (e.g., Bookhagen & Strecker, 2008), though this does not account for variations in N or U from above. Of these, the parameter we'd expect to vary with temperature is N, as differences in it reflect differences in potential temperature, and broadly we'd expect lower average temperature to mean lower potential temperature. But Galewsky discusses that even for values of N assuming an average temperature below freezing, you can still get conditions where Nh/U is greater than 1 (and thus a rain shadow can develop); it will just depend on the other relevant details.
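To make the threshold concrete, here's a quick check of Nh/U with representative values (my own choices for illustration, not numbers from Galewsky):

    N = 0.01     # 1/s, Brunt-Vaisala frequency for a typical stable atmosphere
    h = 1500.0   # m, relief at the ~1.5 km orographic-barrier threshold
    U = 10.0     # m/s, mean wind speed across the range

    print(f"Nh/U = {N * h / U:.2f}")  # 1.5 > 1, so a rain shadow is expected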

If we assume a climate and relief where (1) conditions exist to generate a rain shadow and (2) those conditions dictate that most precipitation will fall as snow, this does have some important effects, as it implies that the precipitation can be carried further "downstream" (i.e., higher and deeper) into the range than rainfall (e.g., Anders et al., 2008). If we consider examples of locations with either modern or past conditions of relatively cold climates (i.e., a lot of precipitation falling as snow) and rain shadows, another prominent effect is that the windward side will tend to develop glaciers whereas the leeward side will have much more limited glacial activity. Good natural examples of this are the Olympic Mountains, the Patagonian Andes, and the Southern Alps of New Zealand.

Finally, most natural examples and experiments considering the development of rain shadows consider linear ranges (though Galewsky, 2009 does briefly consider a circular, volcano-like range). In your hypothetical of a "ring" mountain range, presumably one side would face the predominant wind direction, so this side would have the most extreme contrast between a clear windward and lee side. You would broadly expect the sides of the "ring" parallel to the dominant wind direction to have a precipitation gradient both parallel to the wind direction (i.e., decreasing in the direction of the wind) and perpendicular to it (i.e., drier on the inside of the ring than the outside). An example of a precipitation gradient along a (linear) mountain range that's roughly parallel to the wind direction is the Greater Caucasus, where this is also reflected in a gradient in the amount of glacial activity, with more where it is wet and less where it is dry (e.g., Forte et al., 2016). One unknown for your hypothetical is whether, depending on the size of the ring and other details, you'd expect some sort of weather systems to develop within the basin. I think you'd have to try to model this to really answer that question.

Question from my 9 year old: “when we die, does our microbiome also die?” by lascriptori in askscience

[–]Pacify_via_Cyno 10 points

When you die, much of the bacteria in your body is simply not adapted to living outside of you and will slowly perish. However, some of the bacteria that inhabit our bodies will survive, and they will be responsible for the decomposition process through which your body's molecules are slowly recycled. If you are buried, some of you will help other organisms grow and thrive, some of you will fertilize soil for plants to grow, and some of you will be turned into methane and become part of the atmosphere. Your dying cells will leak carbohydrates and amino acids, which will cause a flurry of growth in the bacteria that inhabit your body. All the resources your body was using before are now available for the bacteria to use and grow. Your gut bacteria eventually escape the gut as you decompose and will colonize other circulatory systems like the lymphatic system and the blood, which are shielded by the immune system while you are alive. Forensic scientists can even study the bacterial signature of a body to estimate how long it has been since death.

The microorganisms that inhabit your gut wouldn't survive in your stomach, as it is far too acidic. However, once you acquire your gut bacteria, barring a long course of antibiotics or serious illness, you will retain that specific microbiome for most of your life. Not the exact same bacteria, of course, but generational descendants of the ones you originally acquired. In fact, the exact biological composition of your gut bacteria is more unique than your fingerprints! It will change slowly over the course of your life, but that specific combination is yours and yours alone, barring special circumstances like fecal transplants.

is a solar wind powered wind turbine possible? by somedudealone in askscience

[–]wwarnout 10 points

Here's a post from another redditor from yesterday:

You can push things with light.

Weirdly enough, photons have momentum; when an atom interacts with a photon, the photon's momentum is transferred to the atom.

You don't get a lot of push; the equation is simply: Force = (power of light in watts) / (speed of light).

That means you'll get 1 newton for every 300 megawatts of light focused onto a surface.

That seems like a lot of light power, but the cool thing about light is that you can't exactly slow it down, so you can bounce it back and forth until the imperfections of your solar sails shift the photons' power and frequency below the reflective floor of your reflectors. I.e., the light gets a little more dim and red every time you bounce it, which will eventually make it too dim and red for the reflector material to reflect.

But as long as you're getting rid of the waste heat, with a good reflector you can bounce it a lot. The target is a total reflected power of 30 GW. With that you can push a 10 kg payload at 10 m/s². Once you've done this you've built what is in essence a small space elevator made of light. A luminary space dumbwaiter, if you will. Payloads can ride the beam up into a geosynchronous orbit and step off to adjust into their parking orbit.
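Checking the quoted numbers (this check is my addition; F = P/c for absorbed light, so 100 N of thrust needs 30 GW if each photon pushes only once):

    c = 3.0e8                  # m/s
    print(300e6 / c)           # 1.0 N per 300 MW absorbed, as stated

    m, a = 10.0, 10.0          # kg payload, m/s^2 target acceleration
    F = m * a                  # 100 N of thrust needed
    P = F * c                  # beam power if each photon pushes only once
    print(f"{P / 1e9:.0f} GW") # 30 GW, matching the quoted target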

How do you get this much light? On average, between 0.6 and 1 kW of sunlight hits every square meter of Earth (most concentrated near the equator). With 1000 drones, each with a 1 m² reflector/collimator system, you can focus roughly 1 MW of light. These drones can be stacked into a 5 m × 5 m storage space on shipping vessels, making them deployable 1 MW power stations floating on the ocean. Some creative optics can combine the collected light into a single concentrated beam once enough drones have been accumulated.

Once you can put 10 kg payloads into geosynchronous orbit, you can start putting drones in orbit around the Earth. The light is 30% brighter above the atmosphere; much of this will be absorbed when concentrated on any ground-based receivers, but it would allow for the collimation of high-powered transfer-orbit lasers for the next phase of the system.

Next we use our light-based interplanetary propulsion system to begin putting 50 m radius reflectors in a 10 million km orbit of the sun. At this distance each reflector will receive roughly 2.5 GW of light, which can be concentrated and aimed at a receiver on Earth.

About 250 of these reflectors will concentrate enough light to meet the electrical demands of the United States.

About 1000 of these reflectors will concentrate enough light to meet the electrical demands of the entire world.
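These reflector numbers roughly check out against the inverse-square law (my own verification with approximate constants; "US demand" here means average electric power of a few hundred gigawatts):

    import math

    L_sun = 3.8e26                      # W, solar luminosity
    r = 1.0e10                          # m, a 10 million km orbit
    flux = L_sun / (4 * math.pi * r**2) # ~3e5 W/m^2 at that distance

    area = math.pi * 50.0**2            # m^2, one 50 m radius reflector
    P_one = flux * area
    print(f"{P_one / 1e9:.1f} GW per reflector")   # ~2.4 GW, near the quoted 2.5
    print(f"{250 * P_one / 1e9:.0f} GW from 250")  # ~590 GW, ballpark of US demand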

At 175,000 systems we reach the rank of a Type 1 civilization and we're collecting enough light to terraform planets by concentrating the beam to the diameter of the planet. (Much better than nuking Mars)

Not to mention that we would have a light-based railway to move people, power, resources, and data in between remote planetary bases.

The beam will propel ships at 10 m/s² to provide artificial gravity and to reduce travel times to realistic time frames. At this constant acceleration you can make a trip from the Sun to Pluto in under three weeks.
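For what it's worth, here's a constant-acceleration check of that trip time (accelerate half the distance, flip, decelerate; my own calculation with a rough Sun-Pluto distance):

    import math

    d = 5.9e12   # m, roughly the Sun-Pluto average distance
    a = 10.0     # m/s^2, the quoted beam acceleration
    t = 2 * math.sqrt(d / a)        # accelerate halfway, decelerate halfway
    print(f"{t / 86400:.1f} days")  # ~17.8 days; peak speed is ~2.6% of c,
                                    # so ignoring relativity is fine here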

You'll need to strategically place some reflectors to control the vector of the beam to accelerate/decelerate the ship relative to its target, but when the ship is made out of the right reflective material you would have a railway made of light to bounce around the solar system.

The numbers also aren't quite that clean; I'm rounding conservatively for the sake of communication, but I can drop how to do the calculations in the comments if anyone is interested.

Question from my 9 year old: “when we die, does our microbiome also die?” by lascriptori in askscience

[–]Allfunandgaymes 52 points

For the first question: it depends. The human microbiome is responsible for much of the decomposition of our bodies after we die - the act of dying doesn't simply hit a kill-switch on all the microbes in and on your body. Depending on where and how you die, some of your microbiome may live on in the environment. For example, if you die in the woods, many of the bacteria and fungi in your microbiome will disperse into the environment after you die and continue on as decomposers. If your remains are collected shortly after you die and are embalmed / entombed, most of your microbiome will indeed perish with you as it runs out of consumable tissue. In either case, there are a good number of human-specific microbes that will die after you do because they can't function anywhere else.

For the second question: the human body begins to be colonized by microbes immediately after being born; however, before and shortly after birth the gut is basically sterile. It takes several months of gastrointestinal activity before an infant's gut becomes strongly colonized by microbes. The gut stays colonized for life, but the gut microbiome changes over an individual's life depending on their diet. The gut microbiome of someone who eats a lot of plant matter and complex carbs looks very different from someone who eats a lot of meat and refined carbs. Different microbes excel at fermenting different nutrients and extracting energy from them. This is why it's recommended dietary changes be made gradually and not drastically. If you suddenly shift from being primarily a meat eater to eating nothing but fruit and salad, you're going to have rather extreme diarrhea as you simply don't have the microbiome equipped to handle all that fiber. It is very possible to alter the makeup of your gut microbiome by changing your diet.

Also, very few microbes inhabit the actual stomach due to its extreme acidity, and many that do are pathogenic. Most of the gastrointestinal microbiome is in the mouth and intestines.

Is it possible for meteor impacts to create volcanic hotspots at antipodes? by Fast-Equivalent-2334 in askscience

[–]OlympusMons94 8 points

Source?

With the arguable exception of Alba Mons, there just isn't evidence of extensive antipodal volcanism on Mars. Maybe there are some small volcanoes antipodal to more modest impacts than Hellas or Argyre, but big picture (e.g., Tharsis, Elysium, Syrtis Major), Martian volcanism simply is not dominated by antipodal volcanism. It's Mars, so there are going to be a lot of smaller craters approximately antipodal to almost everything, especially anything north of the equator (for which the antipodes are in the more heavily cratered highlands). That would not be good evidence of antipodal volcanism.

I saw a map which perfectly lined up one of the northern volcanoes with Argyre.

There isn't, and the same Williams and Greeley paper that notes Alba Mons as a possible example of antipodal volcanism also claims there isn't. That is, unless you mean Elysium, the northern edge of which is ~10 degrees closer to the equator than the northern edge of Argyre, and ~20 degrees compared to the center (and a bit too far east as well). The most obvious feature roughly antipodal to Argyre is another, much smaller crater: Mie. Otherwise, it's just the rather featureless northern plains---and the Viking 2 landing site. It is possible, but as of yet unfalsifiable, that there is a minor, undiscovered volcanic center buried beneath this region. (One would expect anything particularly large to leave a gravity, if not topography, signature (as Tharsis and Elysium do), which would have been mapped by now.)

I don't think timing is a problem. Volcanism could be induced by means of making certain regions of the crust weaker, which would then make them more prone to volcanism later.

I noted that the initiation of Alba Mons may still have been facilitated by Hellas. But there isn't some pool of magma below the crust waiting to erupt if the crust is cracked a bit. There also needs to be a source of heat, and/or upwelling to drive decompression melting. It is implausible for an impact to be the proximate cause of eruptions that don't start until hundreds of millions of years later (and then continue for hundreds of millions of years). There would have to be additional mechanisms at play. Perhaps Alba Mons is just older than the available surface ages suggest, and the evidence is buried by younger lava flows. Regardless, Alba Mons would be a singular case, not "every large volcano being antipodal to a large impact". (Antipodal pressures were also ~7x higher for Hellas than Argyre, so the former eventually inducing an antipodal hot spot doesn't mean the latter should as well.)

Now, Williams and Greeley (1994) do note that the fractured crust of Noctis Labyrinthus (not a center of major volcanic activity) is antipodal to Isidis, which produced antipodal pressures intermediate between those of Argyre and Hellas. As they speculate, the older fractures could have been caused by Isidis. But this fractured region is generally understood to be the product of the stresses produced by the immense load of the Tharsis bulge to its northwest.

Is it possible for meteor impacts to create volcanic hotspots at antipodes? by Fast-Equivalent-2334 in askscience

[–]OlympusMons94 16 points

Alba Mons is approximately antipodal to the center of Hellas, and the Hellas impact has been proposed as a possible origin for Alba Mons (Williams and Greeley, 1994). Williams and Greeley (1994) also note the lack of volcanism antipodal to the (smaller than Hellas) large impacts of Isidis and Argyre. Looking at the antipode of the largest confirmed impact basin on Mars, Utopia, shows a similar lack of major volcanic features.

One should first note that Tharsis is immense, covering a large fraction of Mars' surface, and Mars is covered in impact craters and has a number of other smaller volcanic regions. (Thus, comparing antipodal craters to part of Tharsis, or the approximate location of any other volcanic region to an arbitrary crater, could easily just be some variant of the Texas sharpshooter fallacy.) Alba Mons itself is geographically on the northeastern periphery of the broader Tharsis region, and it is at best unclear how much of its geologic origin is shared with the Tharsis bulge and the other, more centrally located, large volcanoes (Olympus Mons, Tharsis Montes, etc.), which are not remotely close to antipodal to Hellas or any similarly sized basin. After Tharsis, the next largest volcanic region is Elysium, which is not antipodal in any sense to a large impact basin.

Timing is also a problem. The Hellas basin is ~3.8-4.1 billion years old (i.e., formed in the Noachian period). The Tharsis bulge itself (which can hardly be described as antipodal to Hellas in any meaningful sense) is of comparable or slightly younger age (largely in place by the late Noachian, ~3.7-3.8 billion years ago), but the large volcanoes that formed on top of it are younger (Hesperian to Amazonian periods, i.e., the past 3.7 billion years). Indeed, some volcanic activity has continued in Tharsis (and Elysium) for billions of years, and into geologically recent times--to a very limited extent, as recently as within the past few million years. Age is also problematic for merely associating Alba Mons in particular with Hellas. Its age estimates are around 3.5-3.8 billion years (i.e., the Hesperian period). Now, age estimates for surfaces on Mars are only rough estimates from crater counts, but Alba Mons appears to be significantly younger than Hellas. Also, while volcanic activity at Alba Mons ended much earlier than in the overall Tharsis region (and in the separate Elysium/Cerberus region), it still went on for a few hundred million years--and there may have been dike emplacements (i.e., magma, but not erupting) within the past 0.5-1 billion years. Therefore, Alba Mons is not purely the product of a singular event like an impact--even in the possible, if unlikely, event that one did initiate it.

All of that said, it is increasingly accepted that the lowlands that cover the northern hemisphere of Mars, and thus the Martian dichotomy, are the product of a single impact much larger than Hellas or Utopia. Many have also proposed that the dichotomy (over which Tharsis is approximately centered) is ultimately responsible for the development of Tharsis. But that is a whole other discussion and hypothesis (or set of hypotheses) unrelated to antipodal volcanism.

Is it possible for meteor impacts to create volcanic hotspots at antipodes? by Fast-Equivalent-2334 in askscience

[–]Microflunkie 81 points

Magnificent and concise answer; a pleasure to read. Your correct use of both e.g. and i.e. was sublime. Add to that your citations for each point, and this reply is elevated, in my opinion, to the highest caliber and quality.

Is it possible for meteor impacts to create volcanic hotspots at antipodes? by Fast-Equivalent-2334 in askscience

[–]CrustalTrudger (Tectonics | Structural Geology | Geomorphology) 284 points

Plagiarizing a bit from previous answers I wrote - there's generally a consensus (and demonstrable evidence) that impacts cause melting and that sufficiently large impacts can produce similar volumes of melt to Large Igneous Provinces (e.g., Elkins-Tanton & Hager, 2005), and further suggestions that they might be able to initiate some amount of prolonged melting after the impact that would look similar to a plume (e.g., Jones et al., 2002, Jones, 2005). Critically though, these papers argue for melting / LIP formation colocated with the crater (and basically obliterating/filling the crater), not antipodal to or even significantly displaced from the impact site. By the time you're at an impact with sufficient kinetic energy to induce melting on the other side of the planet, you're basically at the scale of the Moon-forming impact, where the entire crust and large portions of the mantle of the Earth were melted (i.e., to induce melting on the other side of the planet from the impact, you'd basically need to induce melting everywhere between the impact and the antipode). Following that, it's also worth noting that the ability to generate large volumes of melt depends a lot on the details of both the impactor (mainly its kinetic energy, so a mixture of its size, density, and velocity) and the target (e.g., lithosphere thickness, mantle temperature, etc.), and generating large volumes of melt is not common (e.g., Ivanov & Melosh, 2003). For example, the Elkins-Tanton & Hager paper highlights that large volumes of impact melting were likely more common earlier in Earth's history, both because impacts (some very large) were more common and because the Earth was generally hotter; the vast majority of LIPs that are preserved today are not explainable with impacts.

I'm going to guess this question is inspired by the oft-misunderstood hypothesis that the Chicxulub impact influenced the Deccan Traps eruption? With reference to this, the beginning of the Deccan Traps eruption clearly predates the impact by ~250,000 years (e.g., Schoene et al., 2014), so the impact was not the cause of the Deccan Traps (or of the plume that generated the Deccan Traps). There has been the suggestion that the impact may have led to a temporary increase in its eruptive activity (e.g., Renne et al., 2013), but the evidence for this remains a bit enigmatic and depends on the precise timing of the particular eruptive phase relative to the impact. In terms of these details, there are some challenges related to mixing of different types of geochronometers (e.g., Schoene et al., 2021) that make conclusively rejecting or accepting the hypothesis of an impact-triggered eruptive pulse in the Deccan Traps pretty hard.

Is there any suspected link between the ~50,000 year old meteor impact craters on Earth? by orulz in askscience

[–]CrustalTrudger (Tectonics | Structural Geology | Geomorphology) 833 points

So first off, let's be precise with the ages.

For Barringer crater, there are three independent cosmogenic exposure ages: 49.7 ± 0.85 ka (thousand years), 49 ± 3 ka (Phillips et al., 1991), and 49.2 ± 1.7 ka (Nishiizumi et al., 1991), which all overlap within uncertainty. However, these ages were all determined in the pretty early days of surface exposure dating, and we've since significantly refined the production rate estimates for these cosmogenic isotopes. Revisiting this data and applying more accurate production rates suggests the age of the crater is 61.1 ± 4.8 ka (Barrows et al., 2019).

For Xiuyan crater, dating is spotty. Liu et al., 2013 could only constrain that it was older than 50,000 years from radiocarbon (i.e., it was beyond the functional limit of radiocarbon dating). The upper limit of the age is really poorly constrained but broadly considered to be ~5 million years (e.g., Indu et al., 2022).

For the Yilan crater, radiocarbon dates suggest an age of 49.3 ± 3.2 ka (Chen et al., 2021).

Finally, the Lonar crater is really all over the place. Radiocarbon dates suggested a wide range of 1.79 ± 0.04 to 40.8 ± 1.1 ka (Maloof et al., 2010), fission track produced a pretty ugly estimate of 15 ± 13 ka (Storzer & Koeberl, 2004), thermoluminescence dating of impact glass suggested 52 ± 6 ka (Sengupta et al., 1997), an Ar-Ar date suggested a much older age of 570 ± 47 ka (Jourdan et al., 2011), and a combination of cosmogenic exposure and radiocarbon suggests 37.5 ± 5.0 ka (Nakamura et al., 2014). Suffice to say, it's not clear which of these dates is correct (though Nakamura et al. have a reasonable explanation to rule out the anomalously old Ar-Ar date), but the majority of them do not overlap with the ~50 ka range of interest or, more importantly, with the ages of any of the other craters we're talking about.

In summary, of the four craters mentioned, only Barringer and Yilan have similar ages, and only if we use the original (incorrect) ages for Barringer. Ultimately, none of the ages of these craters overlap within uncertainty. I'll also point out a good resource in the form of a compilation of terrestrial impact ages from Osinski et al., 2022 and a corresponding website.
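A trivial way to see the (non-)overlap is to compare dates within their combined 1-sigma uncertainties (a crude check I've added, using the ages quoted above):

    def overlap_1sigma(age1, err1, age2, err2):
        """Crude test: do two dates agree within combined 1-sigma errors?"""
        return abs(age1 - age2) <= err1 + err2

    barringer_orig = (49.7, 0.85)   # ka, original exposure age
    barringer_new = (61.1, 4.8)     # ka, recalibrated (Barrows et al., 2019)
    yilan = (49.3, 3.2)             # ka (Chen et al., 2021)

    print(overlap_1sigma(*barringer_orig, *yilan))  # True, but uses the old age
    print(overlap_1sigma(*barringer_new, *yilan))   # False with the revised age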

AskScience AMA Series: Have we entered a new geological era? We're climate experts, who've been investigating Crawford Lake, a potential mark for the beginning of the Anthropocene. Ask us anything! by AskScienceModerator (Mod Bot) in askscience

[–]ManicMonkOnMac 13 points

I struggle deeply with the current state of affairs w.r.t. climate change. Thanks for sharing your coping strategies. I began gardening, and it really helps alleviate the climate anxiety.

How were free neutrons first created en masse? by Crayola_Chomper in askscience

[–]dragmehomenow 25 points

The specific reaction used back in the day effectively amounts to mixing polonium and beryllium and hoping for the best. Polonium-210 decays to lead-206 and produces an alpha particle, which is absorbed by beryllium-9 to produce carbon-12 and a neutron. The polonium and beryllium are kept separated by a thin film of metal (usually aluminum) until the warhead initiates.

Bringing the polonium and beryllium together is typically done by sandwiching them somewhere inside the warhead and relying on the implosion of the warhead to mix them. The main problem is that polonium-210 has a half-life of around 138 days, so it has to be replaced often. This wasn't a problem in WWII, given that they were basically sending them straight to the Pacific once they were done, but it became more of a problem in the Cold War.
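To see how fast a Po-210 initiator decays away (simple half-life arithmetic, added for illustration):

    half_life = 138.0   # days, Po-210
    for t_days in (138, 276, 365):
        remaining = 0.5 ** (t_days / half_life)
        print(f"after {t_days} days: {remaining:.0%} of the Po-210 left")
    # after a year only ~16% remains, hence the regular replacement schedule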

More recently, nukes have used a compact particle accelerator to generate neutrons through spallation, but my favorite form of neutron-related nuclear shenanigans involves fusion boosting. Nuclear fusion typically involves the deuterium-tritium reaction, where deuterium and tritium fuse to form helium and a really high-energy neutron. Being high-energy, this neutron is much more likely to collide with a fuel nucleus and cause fission. As before, bringing deuterium and tritium together in an implosion warhead is pretty straightforward. If you place them in the core, the nuclear initiation will typically produce the necessary temperatures and pressures. Even if fusion occurs for only a few nanoseconds, enough neutrons are produced to kick the fission reaction along and increase the amount of energy produced before the core disassembles itself violently.

Anyway, the real question is: why do you need neutron initiation? To answer this, we have to look at the difference between gun-type and implosion-type warheads. Gun-type weapons slam two subcritical cores together to form a supercritical core, but there's always a small chance that during the collision process, the nuclear fission reaction might go off early. This wasn't a problem for enriched uranium, but plutonium has a nasty isotope, plutonium-240, whose spontaneous fission releases stray neutrons that tend to trigger this.

(Why is an early reaction bad? Since nuclear bombs release energy exponentially, you only have a limited amount of time before the warhead violently disassembles itself. Since you can't un-start the process once it begins, you're incentivized to delay it for as long as possible. Early initiations cause fizzles, which can range from "we only got 5 kilotons instead of 50 kilotons" to LLNL's embarrassing failure to even destroy the scaffolding holding one of their first warheads.)

The solution was implosion-type warheads, which effectively consist of a supercritical mass of a fissile material machined with large voids so that it doesn't go supercritical until you collapse it into a sufficiently small volume. For comparison, insertion times range from 1 to 5 microseconds in implosion-type warheads and 1 to 3 milliseconds (or 1,000 to 3,000 microseconds) in gun-type warheads.

Reducing insertion times to microseconds grants you finer control over the initiation process. By injecting a large number of neutrons into the supercritical mass, you now control when the mass starts to release energy exponentially, as opposed to relying on the statistical likelihood of atoms randomly splitting.

(Places to learn more: Nuclear Weapon Archive is an old-school website, but it's been a stellar ELI20 guide to nuclear weapon physics. If you have access to a library or Libgen, Bodansky's Nuclear Energy: Principles, Practices, and Prospects is a classic and one of the core readings in a graduate-level course I took on the science of nuclear weapons in a political context.)

Why was "making heavy water" a mistake? by Xyrula in askscience

[–]djublonskopf 3527 points

I haven’t seen the movie, so hopefully I’m not missing some key piece of narrative context here. But there’s nothing inherently wrong with using heavy water as a moderator for plutonium production. There are heavy water reactors in use today, and they work just fine.

But in, say, 1940, research into nuclear reactors was in its early stages, and scientists were still trying to figure out what materials would even work and what wouldn’t. Graphite (relatively cheap and easy to come by) was tested, but the French and Germans both found that graphite didn’t work in practice…it absorbed too many neutrons and “poisoned” the reaction. Heavy water worked, but it was very expensive and very hard to procure, so building working reactor piles with it would be slow and costly…but since graphite was out, the Germans resigned themselves to heavy water.

The American team figured out that the only problem with graphite was impurities…trace amounts of boron in “usual” graphite was responsible for the unwanted neutron absorption. By this point, however, the USA had put a stop to any new publication of nuclear scientific research, so there was no paper published that the German scientists could learn from. The Americans were able to produce purer graphite, still much cheaper and faster than acquiring heavy water, and so the Americans were able to set up nuclear piles and start experimenting with material while the German scientists were still struggling to pull together enough heavy water to get going.

So, the “mistake” would have been not realizing on their own that graphite impurities were the issue, a mistake that (among many other hindrances) set the Germans even further behind the Manhattan project team.

Is the concept of alpha males among animal species scientific? by ScrollForMore in askscience

[–]Cleistheknees (Evolutionary Theory | Paleoanthropology) 67 points

Unequivocally, yes. I'm guessing your question arises from the popular social pushback against the notion of male dominance hierarchies, which arose in opposition to the abuse of the phrase "alpha male" among people like podcasters, self-help gurus, the pickup artists of the 2010s, etc. Despite those people lacking almost anything remotely scientific in their collective spiels, the existence of male dominance hierarchies across many animal species is virtually undisputed, and much of the research in this realm involves great apes, because they've been the subject of some of the most detailed field work in the last century since formal ethology (the study of animal behavior) came into its own.

In chimpanzee troops, for example, the role of alpha male is very well-defined, and we have documented extremely complex landscapes of social behavior and manipulation related to this status. My fellow Jane Goodall fans will likely recognize the name Frodo, an extremely large male chimpanzee and all-around bastard from the Kasakela troop (the troop which annihilated the rival Kahama troop in the famous "Gombe chimpanzee war"), which was the focus of Goodall's field work in the 1960s. He was second in rank only to his brother Freud, whom he attacked and defeated while the latter was suffering from a parasitic illness, though he also unexpectedly attacked other high-ranking males when they went after Freud following his defeat, which is otherwise a very typical occurrence during these coups. After this, Frodo basically terrorized the rest of the troop into submission for the decades he remained in power, across which his grooming ratios, his sharing of food (or lack thereof), his reproductive behaviors, his allowance of other rivalries, etc., all bore extreme differences from the other males in the troop, and the overall picture we now have of alpha male behavior in chimpanzees is substantially informed by Goodall's extensive fieldwork on Frodo.

Similar well-defined trends of behavior in dominant males exist in gorillas and baboons, less so in orangutans (which are simply far more solitary than the rest of the great apes) and bonobos, whose hierarchy is variable across groups and generally not that restrictive of individual behavior. With regards to humans, because we've had such extremely complex social behavior for so long, it really doesn't do to draw direct comparisons between us and other great apes, let alone other animals outside our order. The idea of an alpha male lone wolf in particular is laughable, since if you see a "lone wolf", the only thing you should really suspect is that something is very wrong with it which has made it unable to thrive in its social environment.

Another area of misconception, which has prompted the throwing out of the baby with the bathwater in this scenario, is the idea that alpha males have a better life than anyone else, which could not be further from the truth in primatology. Dominant male gorillas are under constant threat from typically younger males, who will attack and kill the dominant male's offspring if they can. Curiously, the female whose offspring was just slaughtered will almost always go off and mate with the attacking male, and the behavioral signalling here is debated and not well understood. Likewise in chimpanzees, dominant males constantly face unpredictable intragroup alliances to depose them, and being a kind and benevolent dictator (the antithesis of a Frodo) doesn't really seem to help all that much, nor does it prevent the incredible brutalization that virtually always befalls deposed leaders. Much of this violence seems directed at the genitals, as deposed males almost always have severe genital injuries. When tranquilizing dominant male baboons for study, for example, rival male onlookers routinely try to attack the tranquilized male's genitals while he's incapacitated.

On the reproductive front, female chimpanzees are so promiscuous that contemporary genetic analysis has shown that often as many as half of the infants in a given troop are not sired by any high-ranking males in that troop. Adult females will generally mate with almost every mature male in the troop during each fertility cycle, in addition to leaving the troop borders to mate with males of other troops. In this way they contrast substantially with gorillas: chimpanzee males will fiercely defend any infants in their own troop, despite generally having no idea of an infant's paternity, while non-dominant male gorillas can be almost certain that the infants in a group belong to the dominant male, and those infants are therefore a prime target for infanticide. All in all, if you're a male great ape looking to have a nice life, being an alpha male is really not what you'd shoot for.

If you're a human? Way more complex question. It is without question that virtually all human social groups have one or a few males at the top of their social hierarchies, but the restriction on behavior for the other males in the group varies so much that I can't even begin to explain it here. Many traditional societies were in fact so focused on maintaining their population that often there wasn't even that much of a reproductive limitation on non-dominant males; they just wanted all fertile women to be reproducing as much as possible, and in these cultures you see huge importance placed on female fertility, while "paternity" is mostly a socially constructed thing. This is very typical of premodern sub-Saharan Africa, where you would (and still do, to some extent) routinely see the offspring of female infidelity counted right alongside their half-siblings in the familial context, and in which intragroup violence is often suppressed in favor of violence directed at the generally unrelated males of other groups. In contrast, you have the later, mostly Eurasian agrarian societal periods where limiting population growth was of utmost importance, and among these you have severe restrictions on sexual behavior, vast amounts of infanticide and violence towards women, routine intragroup violence, etc.

All of these contribute to the dynamic of dominance hierarchies in humans, and there is really no one picture that adequately characterizes them all, much less a typical "alpha male" role among human groups. You can reasonably say that there are almost always male(s) at the top of social hierarchies, but their behavior is almost never this harem-having, Conan the Barbarian fantasy where they reproductively monopolize all the females. Polygamy is of course very real, and women across many premodern societies found it to be a very beneficial arrangement in terms of resources, but this doesn't really orient around some inherent quality of the male which makes him "the alpha", because most likely he was simply the oldest surviving son of some other male who already had resources. Fast forward to today, and a huge number of economically "high-status" males (take this phrase with an absolute boulder of salt) pay tons of money to have their vas deferens blocked off so they can have more non-reproductive sex. So the people (read: podcast bros) who continue to suggest these weird, Freudian, reproductively-oriented bases for all male behavior and hierarchies are really just off base. Why would infanticide and partner violence towards women be perhaps the most common behaviors across all historical human cultures if it were all about maximizing fertility?

The reasons underlying all these behavioral trends, even outside humans, are very complex, and lots of ambiguous evidence exists, but to say dominance hierarchies simply don't exist because some cringe-inducing podcasters abuse the phrase to attain influence is ridiculous.

In lieu of listing 50 citations for everything here, if anyone wants further reading on something please just specify and I'll get it to you.

Edit: some citations from a variety of sources, including peer-reviewed journals, an interview with one of the most well-known primatologists in the world, and the Goodall Institute. Sorry if the formatting is bonked. Can't use Apollo anymore, so I'm on the mobile site.

From the American Journal of Primatology in 2009:

“Alpha chimpanzee grooming patterns: implications for dominance 'style'” https://pubmed.ncbi.nlm.nih.gov/19025996/

From Cell, 2021:

“Alphas, but not others with high rank, sired a disproportionate share of offspring”

https://www.cell.com/iscience/fulltext/S2589-0042%2821%2900832-4

From a lay publication's interview with Frans de Waal, the director of Emory University's very famous primatology research center:

https://www.independent.co.uk/news/world/europe/alpha-male-alpha-chimpanzee-primatologist-frans-de-waal-a8421291.html

The Goodall Institute has a good public-targeted overview as well:

“Let’s start at the top: The highest-ranking chimpanzee in a group is the alpha-male...”

https://news.janegoodall.org/2018/07/10/top-bottom-chimpanzee-social-hierarchy-amazing/

Peak power production in the Sun has been compared to the volumetric heat generated in an active compost heap. At the center of the Sun, theoretical models estimate it to be approximately 276.5 watts/m³. How can a fusion torus generate 50,000 times the energy of the sun, pound for pound? by MegavirusOfDoom in askscience

[–]AlericandAmadeus 322 points

To expand on this - the sun benefits from being absolutely MASSIVE (both meanings of the word)

1.) There is a ton more fusionable material in the sun due to its enormous size, so it produces gargantuan amounts of energy even with a slower fusion process.

2.) Because the sun is so massive, everywhere that fusion could be taking place in the sun is a relatively huge region under insane amounts of pressure - so the overall likelihood that particles are going to collide and fuse in the center of the sun is much higher, even with lower temperatures and a lower chance for any one particular atom. There's just billions of times more stuff, all under insane pressure and still insane heat. The law of averages means you're still gonna get a lot more fusion.

You can’t build a fusion device that’s analogous to the sun because a key part of the way the sun works is by being AS BIG AS THE SUN. We have to use other materials/methods/temps that are a lot more reactive and a lot more focused.

Super simple way of putting it — the more mass you have, the less of everything else you need to attain nuclear fusion; there is a tipping point where having enough mass creates conditions conducive to fusion. The less mass you have (i.e., below the tipping point), the more of everything else (temperature, more reactive material, etc.) you need — you don't have that level of mass, so you have to crank up the other parts of the equation to achieve the same result.

Modern fusion devices are “more efficient pound for pound” because they operate differently than the sun. We have to optimize our limited resources (read: D-T fusion) and crank up the temp (read: kinetic energy of particles) to overcome the forces that normally repel atomic cores from smacking into each other to achieve even a little fusion, whereas the sun doesn’t really have to care - it guarantees large quantities of fusion just by being the sun (an absolutely massive ball of hydrogen, helium, and trace heavier elements). And it can do this on its own for billions of years simply by existing as it does - the mass/pressure keeps the reaction going even if it isn’t the most efficient fusion reaction possible.

Edit: added a few sentences at the end, and wanted to stress again that this is an oversimplification of things, but it’s a good way to understand why you might see #s like OP’s question that seem counterintuitive.

TL;DR - The sun works on a “quantity over quality” principle cuz it has the quantity. human fusion devices have to be relatively all “quality” cuz we don’t have billions of tons of fusionable material just laying around.

Sun exposure - 30 minutes straight under the sun versus 10 minutes under the sun but done 3 times. What is the difference, if there is one? by livelaughloaft in askscience

[–]kagamiseki 1038 points

Spread out exposures are better.

DNA damage is constantly being repaired.

We have multiple mechanisms to repair this damage. For illustration, let's say you have 3 repair mechanisms, and it takes 10 minutes to repair damage to any one of them.

Scenario 1.

You spend 10 mins in the sun, which damages repair mechanism A. You stop under a tree for 10 minutes. Mechanisms B and C fix mechanism A. You continue, until you've been out for a total of 2 hours. End of the day, the mechanisms managed to repair each other, because there was always a backup repair mechanism.

Scenario 2.

You spend 2 hours straight in the sun. All 3 repair mechanisms have been damaged by the UV radiation. You don't have any more repair mechanisms left, but the body still remembers to kill off these damaged cells, resulting in a nasty sloughing sunburn.

Scenario 3.

You spend 2 hours straight in the sun. All 3 repair mechanisms are damaged. The cell-suicide (apoptosis) mechanism is also damaged. Some damaged cells survive and become cancer.

This is the multiple-hit hypothesis of cancer. You can take damage in many places, but if you accumulate damage to all of the repair mechanisms simultaneously, that's when cancer happens. I can't say how much of a difference a couple minutes of breaks will make, but it will probably be better than taking all of the damage continuously.
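Here's a toy Monte Carlo of the three-mechanism illustration above (my own sketch with made-up probabilities, not a biological model), comparing 30 straight minutes of sun against the same 30 minutes broken up by shade breaks:

    import random

    def run(schedule, p_hit=0.2, trials=100_000):
        """Fraction of trials where all 3 repair mechanisms end up damaged."""
        failures = 0
        for _ in range(trials):
            damaged = 0
            for block in schedule:          # 'S' = 10 min sun, 'B' = 10 min shade
                if block == 'S':
                    if damaged < 3 and random.random() < p_hit:
                        damaged += 1        # sun knocks out one mechanism
                elif 0 < damaged < 3:
                    damaged -= 1            # intact mechanisms repair one
            if damaged == 3:
                failures += 1
        return failures / trials

    continuous = ['S', 'S', 'S']             # 30 straight minutes
    spread = ['S', 'B', 'S', 'B', 'S', 'B']  # 10 on, 10 off, three times
    print(run(continuous), run(spread))      # spread exposure fails far less often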

Think, what's worse, 8 drinks in 3 hours, or 8 drinks in 30 minutes? Total amount of alcohol is the same, but there's a difference.

Edit to add: Way way better than spreading out exposure, is to avoid UV exposure in the first place. Not all DNA damage can be repaired. Exposure to UV radiation always increases your cancer risk. Wear sunscreen or other types of UV mitigation.

Aspartame made from E. Coli waste? Is this as bad as it sounds? by defaultpath in askscience

[–]defaultpath[S] 50 points

Exactly what I was looking for. Thanks for putting that in perspective :)

If mRNA vaccines train your body to fight diseases, can it train your body to not fight certain things? by failstoomuch in askscience

[–]iayork (Virology | Immunology) 775 points

Yes, this is an active area of research and there are several promising studies.

Without going into details, and oversimplifying wildly, the goal of allergy vaccines is to change the nature of an immune response to suppressive rather than inflammatory. (It’s often not understood that active immunity can be suppressive, turning off inflammatory responses. There are large and highly active branches of the immune system devoted to this active suppression, which can be antigen-specific and non specific.)

Accordingly, in principle it’s possible to use a vaccine, delivered in the right way, to turn on the suppressive response to a particular antigen, which should shut off allergies. It’s tricky, because of course you want to avoid amplifying the ongoing inflammatory response, but it can be done.

Since these are essentially vaccines, mRNA and DNA can be used to deliver them. For example:

Allergen‐specific immunotherapy, which is performed by subcutaneous injection or sublingual application of allergen extracts, represents an effective treatment against type I allergic diseases. … Plasmid DNA and mRNA vaccines encoding allergens have been shown to induce T helper 1 as well as T regulatory responses, which modulate or counteract allergic T helper 2–biased reactions. … Due to inherent safety features, mRNA vaccines could be the candidates of choice for preventive allergy immunizations. The subtle priming of T helper 1 immunity induced by this vaccine type closely resembles responses of non‐allergic individuals and—by boosting via natural allergen exposure—could suffice for long‐term protection from type I allergy.

DNA and mRNA vaccination against allergies

It can also be done as a preventive, in say children with the genetic tendency to develop allergies:

Our data clearly indicate that mRNA vaccination against Phl p 5 induces robust, long-lived memory responses, which can be recalled by allergen exposure without side effects. mRNA vaccines fulfill the requirements for safe prophylactic vaccination without the need for booster immunizations.

Prophylactic mRNA Vaccination against Allergy Confers Long-Term Memory Responses and Persistent Protection in Mice

A more recent paper on the same principle:

We developed a liver-targeting lipid nanoparticle (LNP) platform to deliver mRNA-encoded peanut allergen epitopes to liver sinusoidal endothelial cells (LSECs), which function as robust tolerogenic antigen-presenting cells that induce FoxP3+ regulatory T-cells (Tregs). … These results demonstrate an exciting application of mRNA/LNP for treatment of food allergen anaphylaxis, with the promise to be widely applicable to the allergy field.

Use of a Liver-Targeting Immune-Tolerogenic mRNA Lipid Nanoparticle Platform to Treat Peanut-Induced Anaphylaxis by Single- and Multiple-Epitope Nucleotide Sequence Delivery

Why did only 1g of the Hiroshima bomb go through fission? by BritishBacon98 in askscience

[–]RobusEtCeleritas (Nuclear Physics) 15 points

Early designs like those of the Manhattan Project were not very efficient at using up their fuel.

You can estimate how much fuel actually reacted by taking the total explosive yield of the weapon and dividing by the average fission Q-value (assuming fission is the only reaction contributing to the yield), and that gives you the number of fission reactions that happened. Each fission reaction involves one atom of fuel, so that tells you how many atoms of fuel reacted, which is trivial to convert to a mass.
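Here's that estimate with assumed round numbers (a sketch, not the answerer's exact figures; note that the ~1 g in the question title is much closer to the mass converted to energy, E/c², than to the mass of fuel that actually fissioned):

    # Rough inputs: ~15 kt yield, ~200 MeV per fission (both assumed)
    yield_kt = 15.0
    E = yield_kt * 4.184e12          # J (1 kt TNT = 4.184e12 J)
    Q = 200e6 * 1.602e-19            # J per fission

    n_fissions = E / Q                      # ~2e24 fission reactions
    mass_g = n_fissions / 6.022e23 * 235    # grams of U-235 fissioned
    print(f"~{mass_g:.0f} g of U-235 fissioned")       # ~760 g, of ~64 kg of fuel
    print(f"~{E / 9e16 * 1000:.1f} g converted to energy")  # ~0.7 g via E = mc^2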