Hi Reddit,
We're all coming back from OpenCon, an annual conference that convenes some of the most innovative early-career researchers (ECRs) to advocate for greater openness and transparency in research and publishing. We're here today to share some of the keynotes from this year's conference and to answer your questions about all things open. Here's a little bit more about us, and our individual projects:
I'm Achintya Rao, a PhD candidate in science communications at the University of the West of England. I work at CERN as a science communicator for the CMS Collaboration, and have recently been contributing to the development of the CERN Open Data Portal. I also wrote the PhD Starter Kit, a guide for early career researchers on how to use different tools to keep data and research more open.
I'm Brewster Kahle, the Founder and Digital Librarian of the Internet Archive. The goal is Universal Access to All Knowledge.
I'm Heather Piwowar and I am a cofounder of Impactstory, a service that makes it easy, fun, and free to learn about the online impact of your research.
We are a non-profit funded by the Sloan Foundation and the NSF, and our mission is to motivate researchers to make their work open and accessible.
I'm Soazic Elise WANG SONNE and I am a PhD fellow in economics of innovation and technological change at the United Nations University in Maastricht. I am also serving as a catalyst for the Berkeley Initiative for Transparency in the Social Sciences (BITSS) of the Center for Effective Global Action (CEGA) at UC Berkeley. Part of my job as a catalyst is to foster the adoption of transparency and open-access practices in social science research through advocacy training and workshops. So far, my regions of focus are Europe (UK) and Sub-Saharan Africa (Cameroon and South Africa).
I am currently working on the possibility of implementing a randomized controlled trial to better understand what works to enhance open research practices in French-speaking Sub-Saharan African countries. I am also involved in a project that aims to develop an alternative metric, building on open peer review, for assessing the impact of academic publications in Francophone Sub-Saharan Africa.
Follow our projects on Twitter at @internetarchive and @impactstory. Follow OpenCon on Twitter @open_con and the discussion from this year's conference at #OpenCon.
We will be answering your questions at 1pm ET -- Ask Us Anything!
The /r/science discussion series is a series of posts by the moderators of /r/science to explain commonly confused and misunderstood topics in science. This particular post was written by myself and /u/fsmpastafarian. Please feel free to ask questions below.
A cornerstone of scientific study is the ability to accurately define and measure that which we study. Some quintessential examples of this are measuring bacterial colonies in petri dishes, or the growth of plants in centimeters. However, when dealing with humans, this concept of measurement poses several unique challenges. An excellent illustration of this is human emotion. If you tell me that your feeling of sadness is a 7/10, how do I know that it’s the same as my 7/10? How do we know that my feeling of sadness is even the same as your feeling of sadness? Does it matter? Are you going to be honest when you say that your sadness is a 7? Perhaps you’re worried about how I’ll see you. Maybe you don’t realize how sad you are right now. So if we can’t put sadness in a petri dish, how can we say anything scientifically meaningful about what it means to be sad?
Subjective experience is worthy of study
To start, it's worth pointing out that overcoming this innate messiness is a worthwhile endeavor. If we put sadness in the "too hard" basket, we can't diagnose, study, understand, or treat depression. Moreover, if we ignore subjective experience, we lose the ability to talk about most of what it means to be human. Yet we know that, on average, people who experience sadness describe it in similar ways. They become sad in response to similar things, and the feeling tends to go away over time. So while we may never find a "sadness neurochemical" or "sadness part of the brain", the empirically consistent structure of sadness is still measurable. In psychology we call this sort of measure a construct. A construct simply means anything you have to measure indirectly. You can't count sadness in a petri dish, so any measure of it will involve a level of abstraction and is therefore termed a construct. Of course, constructs aren't exclusive to psychology. You can't put the taxonomy of a species in a petri dish, physically measuring a black hole can be tricky, and the concept of illness is entirely a construct.
How do we study constructs?
To start, the key to any good construct is an operationalized definition. For the rest of this piece we will use depression as our example. Clinically, we operationalize depression as a series of symptoms and experiences, including depressed mood, lack of interest in previously enjoyed activities, change in appetite, physically moving slower (“psychomotor slowing”), and thoughts of suicide and death. Importantly, and true to the idea of a consistent construct, this list wasn’t developed on a whim. Empirical evidence has shown that this particular group of symptoms shows a relatively consistent structure in terms of prognosis and treatment.
As you can see from this list, there are several different methods we could use to measure depression. Self-report of symptoms like mood and changes in appetite is one method. Third-party observations (e.g., from family or other loved ones) of symptoms like psychomotor slowing are another. We can also measure behaviors, such as time spent in bed, frequency of crying spells, frequency of psychiatric hospital admissions, or suicide attempts. Each of these measurements is a different way of tapping into the core of the construct of depression.
Creating objective measures
Another key element of studying constructs is creating objective measures. Depression itself may be reliant in part on subjective criteria, but for us to study it empirically we need objective definitions. Using the criteria above, there have been several attempts to create questionnaires to objectively define who is and isn’t depressed.
In creating an objective measure, there are a few things to look for. The first is construct validity. That is, does the measure actually test what it says it's testing? There's no use having a depression questionnaire that asks about eating disorders. The second criterion we use to find a good measure is convergent validity. Convergent validity means that the measure relates as expected to other measures we know are connected. For example, we would expect a depression scale to positively correlate with an anxiety scale and negatively correlate with a subjective well-being scale. Finally, a good measure has a high level of test-retest reliability. That is, if you're depressed and take a depression questionnaire one day, your score should be similar (barring large life changes) a week later.
That all still sounds really messy
Unfortunately, humans just are messy. It would be really convenient if there were some objective and easy way to measure depression, but an imperfect measure is better than no measure. This is why you tend to get smaller effect sizes (the strength of a relationship or difference between two or more measured things) and more error (in the statistical sense of the word: unmeasured variance) in studies that involve humans. Importantly, that's true for virtually anything you study in humans, including fields we often see as more reliable, like medicine or neuroscience (see Meyer et al., 2001).
Putting it all together (aka the tl;dr)
What becomes clear from our depression example is just how complex developing and using constructs can be. However, this complexity doesn't make the concept less worthy of study, nor less scientific. It can be messy, but all sciences have their built-in messiness; this is just psychology's. While constructs such as depression may not be as objective as bacterial growth in a petri dish or the height of a plant, we use a range of techniques to ensure that they are as objective as possible. No study, measure, technique, or theory in any field of science is ever perfect. But the process of science isn't about perfection; it's about defining and measuring as objectively as possible to allow us to better understand important aspects of the world, including the subjective experience of humans.
Kite Pharma is a biotech company that manufactures CAR-T cells. Essentially, CAR-T cells are T-cells taken from a patient, engineered to recognize and destroy the patient's tumor, and then put back in the body to kill the cancer cells.
The CAR-T concept has been exciting researchers for several years now, but clinical studies were typically small and mostly focused on testing the safety of the technology. Last night, Kite Pharma released new data from their ongoing pivotal (meaning intended to be used to apply for FDA approval) phase II study using CAR-T cells in non-Hodgkin lymphoma. The results were very impressive. KTE-C19 (the CAR-T drug) met the primary endpoint of objective response rate (ORR), p < 0.0001, with an ORR of 76 percent, including 47 percent complete remissions (CR). Historically, the standard of care has an 8% CR rate for these patients.
While very exciting, there are still several concerns with the technology: namely safety, and duration of remission.
A number of patients experienced adverse events related to the drug, and two died as a result of treatment. Additionally, while 47% of patients experienced a complete remission, some had relapsed three months later.
This is part of the Science Discussion Series, so I will try to check in intermittently during the day to help discuss this clinical trial, CAR-T cells and other cool technologies in the immunotherapy space.
On Tuesday, July 14, New Horizons (website, Wikipedia page) will pass by Pluto. Pluto is one of the largest members of the Kuiper Belt. Kuiper Belt objects (KBOs) are small bodies made up of rock and ice, with orbits predominantly outside of Neptune's orbit (to be precise, they have semi-major axes larger than Neptune's). In advance of New Horizons' flyby of Pluto, I thought I'd post a science discussion to talk about what we already know about Pluto and why it is an interesting/important thing to study. I'm not on the mission team, but I'm generally knowledgeable about Pluto.
History:
In 1846, Neptune was found based on predictions from Uranus' orbit not behaving like it should given the masses and locations of the other known planets. After following Neptune's orbital motion, and continuing to follow that of Uranus, something still didn't seem quite right and an additional planet was posited. Thus, when Pluto was first found in 1930, astronomers thought it was very massive (like the gas giants), massive enough to significantly perturb the orbit of Uranus and Neptune. What was really going on was that we didn't know the mass of Neptune very well. Once Voyager 2 flew by Neptune, it was clear that perturbations on Uranus' and Neptune's orbits could be entirely explained without a massive Pluto. In the meantime, Pluto was considered a 'planet'.
In 1992, the object 1992 QB1 was found. QB1 is smaller than Pluto (we know this because it is dim: a small body doesn't have a lot of surface area to reflect much light), but it also orbits in the just-beyond-Neptune region of the solar system. Today, we know of many, many such objects and call this population the Kuiper Belt. It is clear that Pluto is a member of this population. With the discovery of Eris, which is likely larger than Pluto, it became clear that either Pluto should not be considered a planet, or that Eris and others should also be called planets.
A similar thing happened to Ceres (which is currently being visited by Dawn) and other asteroids after they were first discovered. Here's a page from the 1849 edition of Popular Science Monthly on the discovery of Planet Hygea. It mentions the 18 planets known at the time. Once it was clear there was a large population of smaller things orbiting between Mars and Jupiter these objects were no longer referred to as planets.
"Planet" or "Dwarf planet"
The term 'planet' is derived from an Ancient Greek term meaning 'wandering star'. In this sense, all points of light that wander in the sky can be called planets, including the small stuff. That said, dwarf planets clearly exist in a different environment than the major planets.
According to the 2006 International Astronomical Union (IAU) decision, a 'planet' must 1) orbit the sun, 2) be in hydrostatic equilibrium (massive enough for its own gravity to pull it into a shape where gravity and pressure are balanced everywhere, generally an approximately spherical shape), and 3) have cleared the neighbourhood around its orbit.
'Planets', often called the 'major planets', must meet all three criteria. 'Dwarf planets' are objects that meet the first two criteria, but fail the third one. Under this definition, Pluto, Ceres, Haumea, Makemake, and Eris are classified as dwarf planets. 'Small solar system bodies', also called 'minor planets', are objects that meet only the first criterion. The Minor Planet Center maintains a catalogue of the minor and dwarf planets. This definition obviously doesn't address extra-solar planets.
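The three criteria lend themselves to a simple decision rule. As a sketch (the function name and boolean inputs are my own shorthand, not IAU terminology):

```python
def classify(orbits_sun, hydrostatic_equilibrium, cleared_neighbourhood):
    """Classify a Sun-orbiting body per the 2006 IAU definition.

    Moons and extra-solar planets fall outside the scope of the definition.
    """
    if not orbits_sun:
        return "not covered by the definition"
    if hydrostatic_equilibrium and cleared_neighbourhood:
        return "planet"
    if hydrostatic_equilibrium:
        return "dwarf planet"
    return "small solar system body"

# Pluto, Ceres, Haumea, Makemake, and Eris all land here:
verdict = classify(True, True, False)   # "dwarf planet"
```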
Clearing the neighbourhood
Pluto clearly fails the third criterion. However you try to divide things up, there is a big gap between the major planets and what the IAU calls dwarf planets. For example, if you take the mass of any of the major planets and divide it by the sum of the mass of everything else nearby (everything with an orbit that crosses the planet's orbit), you get a number of 2.4×10⁴ (24,000) or greater. If you do the same thing for Pluto, you get ~0.33. See Wikipedia: Clearing the neighbourhood.
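To make the gap concrete, here is a back-of-the-envelope version of that ratio. The neighbourhood mass used for Pluto below is an assumed round number chosen to reproduce the ~0.33 figure quoted above, not a measured value:

```python
def discriminant(body_mass_kg, neighbourhood_mass_kg):
    """Ratio of a body's mass to everything else crossing its orbit."""
    return body_mass_kg / neighbourhood_mass_kg

M_PLUTO = 1.3e22          # kg, Pluto's mass
m_neighbourhood = 4.0e22  # kg, other orbit-crossing Kuiper Belt mass (assumed)

mu = discriminant(M_PLUTO, m_neighbourhood)   # ~0.33: Pluto dominates nothing
# Every major planet scores 2.4e4 or greater on the same ratio.
```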
Before you say 'But Neptune hasn't cleared its neighbourhood either!' consider this analogy: You wipe down your counter top with your favourite anti-bacterial cleaner and in doing so kill 99% of the germs. You thus consider your counter clean. You don't have to kill every last germ to have cleaned your counter. Likewise, to have 'cleared its neighbourhood' a planet must have scattered most small debris away from its orbital region, but isn't required to have gotten rid of everything.
What we call Pluto does not change what it is, and what it is is fascinating.
A note on Pluto's orbit
Pluto is in a resonance with Neptune: it goes around the sun twice every time Neptune goes around three times. This resonance is the reason that Pluto can come closer to the sun than Neptune without worrying about running into Neptune. Neptune just isn't nearby when Pluto comes to perihelion. This image shows the path of Pluto over several orbits in the frame where Neptune's position is held constant.
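The 3:2 period ratio falls straight out of Kepler's third law (P² ∝ a³) and the two bodies' semi-major axes; the values below are the standard ones, rounded:

```python
a_neptune = 30.07  # semi-major axis, AU
a_pluto = 39.48    # semi-major axis, AU

# Kepler's third law: P^2 is proportional to a^3, so P scales as a^1.5
period_ratio = (a_pluto / a_neptune) ** 1.5   # ≈ 1.50, i.e. 3:2
```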
What makes Pluto important?
Dwarf planets are not less important than the major planets. Indeed, dwarf planets can dramatically improve our understanding of planets in general (major, dwarf, and minor). Dwarf planets didn't progress as far along the planet formation process as the major planets did, and thus offer key perspective on planet formation. Also, dwarf planets experience some of the processing major planets do, but either not to as great an extent, or these processes might manifest somewhat differently. In any case, we can better understand the underlying processes of tectonics, atmospheres, etc. by understanding how they operate in different conditions, such as on Pluto.
Pluto will be the first Kuiper Belt object that we have sent a spacecraft to. We have sent spacecraft to all the major planets, as well as several asteroids and a few comets. Neptune's moon Triton (visited by Voyager 2) is possibly a captured Kuiper Belt object, but as the moon of a gas giant it has had a rather different history than an object currently in the Kuiper Belt.
Pluto has a bulk composition not dissimilar to that of a typical comet. However, comets get processed every time they come near the sun. Unlike comets, Pluto has spent its entire history out in the far reaches of the solar system where it's nice and cool.
Pluto's orbit is highly eccentric (non-circular), so it receives a different amount of light (and therefore energy) depending on where along its orbit it is. This difference in energy input results in a difference in surface and atmosphere temperature. By getting observations of Pluto we can further understand how atmospheres work under these conditions.
Pluto has moons! It's got one big moon, Charon, and four small moons: Nix, Hydra, Styx and Kerberos. The small moons were a surprise. When New Horizons launched, we had only recently discovered Nix and Hydra. We know that many Kuiper Belt objects are binaries (two KBOs of comparable size orbiting each other) and that many asteroids are binary or have moons. Charon is big enough that Pluto-Charon could (and often is) considered a binary. The additional presence of small moons is reminiscent of multi-planet systems around binary stars (e.g. Kepler-47).
These are only a few of the ways in which Pluto is interesting and important!
Why can't we just use Hubble to study Pluto?
Pluto is small. Imagine you are standing in Toronto trying to distinguish features on a 5ft 11in person standing in Vancouver. Hubble's resolution is 0.05 arcseconds (1 arcsecond = 1/3600 of a degree). Pluto's maximum apparent diameter is ~0.11 arcseconds, so in a raw Hubble image Pluto spans only about two pixels across, an area of roughly 4 pixels. You can do slightly better by combining many images, but you can only get so far. Here is the best map of Pluto based on Hubble images.
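The pixel figure follows directly from the two angular sizes (treating one resolution element as one pixel, which is a simplification):

```python
import math

hubble_resolution = 0.05   # arcsec per resolution element
pluto_diameter = 0.11      # arcsec, Pluto's maximum apparent diameter

pixels_across = pluto_diameter / hubble_resolution    # ≈ 2.2 pixels wide
pixel_area = math.pi * (pixels_across / 2) ** 2       # ≈ 3.8 pixels of area
```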
Note: Pluto's dimness is not a problem for Hubble. Hubble is more than capable of observing things far dimmer than Pluto.
The close approach
On Tuesday July 14 at 11:50 UTC (07:50 EDT, 04:50 PDT) New Horizons will pass 12500 km above the surface of Pluto. This image depicts the flyby timeline, geometry, and closest approach distances. New Horizons is traveling at about 13.8 km/s. At that speed you could go from Toronto to Vancouver in 4 minutes, or from the Earth to the moon in 7.7 hours. New Horizons won't send us the data immediately (and even if it did, we'd have to wait 4.5 hours for the signal to get from New Horizons to us). Instead, the spacecraft will concentrate on taking data and store it to send back to us later. We should start receiving data from the flyby about a day after closest approach, but full transmission of the data will take a very long time.
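Those travel and signal times are easy to check. The distances below are approximate: ~3,360 km great-circle Toronto to Vancouver, the Moon's mean distance, and Pluto at roughly 32.9 AU at the time of the flyby:

```python
v_spacecraft = 13.8                  # km/s, New Horizons' speed
c_light = 299_792                    # km/s, speed of light

toronto_to_vancouver = 3_360         # km, great-circle distance (approx.)
earth_to_moon = 384_400              # km, mean Earth-Moon distance
earth_to_pluto = 32.9 * 1.496e8      # km, ~32.9 AU (approx., July 2015)

minutes_tv = toronto_to_vancouver / v_spacecraft / 60    # ≈ 4 minutes
hours_moon = earth_to_moon / v_spacecraft / 3600         # ≈ 7.7 hours
hours_signal = earth_to_pluto / c_light / 3600           # ≈ 4.6 hours one-way
```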
Please ask questions and post New Horizons news!
I am funded by the Canadian Institute for Theoretical Astrophysics at the University of Toronto.
Introduction
Nearly all organisms -- from mighty E. coli to humble human -- experience some form of “intrinsic, progressive, and generalized physical deterioration that occurs over time” (Steve Austad’s definition of what is ‘aging’). Yet, despite the ubiquity of aging, the process is far from well understood. Why do we age? What are the molecular mechanisms that drive age-related changes? Can we agree on a definition of what aging is? And can we do anything to slow down, stop or even reverse the process? These are all open questions in the field of biogerontology.
This contribution to the /r/science Discussion Series will introduce a critical framework for understanding the biology of aging and guide readers through an introduction to experimental gerontology – the field of research dedicated to understanding the molecular mechanisms that drive aging and to identifying strategies and therapies for extending healthy lifespans. Hopefully this will generate a vigorous discussion about what aging is and what we (scientists and the general public) can do about it!
Let’s start with some common questions:
Why don't animals live forever (or at least for a really, really long time)?
On the face of it, it would seem that a longer lifespan would be adaptive – more time on earth means more time to procreate and produce more offspring, thereby improving evolutionary fitness. The work of several evolutionary biologists – namely Haldane, Williams and Medawar – provides insight into this question. The basic idea is that in the natural world, animals die from predation and accidents. That is, there is an extrinsic limit to their expected lifespan. What this means, practically, is that genes that would confer fitness and longevity much beyond this expected lifespan are largely ignored by natural selection (because the animal is dead before the genes can confer a selective benefit). As such, longevity tends to be selected for only when a species decreases its extrinsic mortality rate (for example, by growing larger, evolving wings, or moving to environments with fewer predators – all changes in life history traits that would likely lower the rate at which individuals die extrinsically). Consistent with this idea, a general trend in biology is that larger animals have longer maximum lifespans than smaller animals; birds have longer maximum lifespans than similarly sized wingless species; and animals in predator-free environments have longer maximum lifespans than closely related species in predator-rich environments.
Fine. We can’t live forever. But why do we have to fall apart as we get older?
There are a couple of different theories that try to answer this question, with mutation accumulation theory, the theory of antagonistic pleiotropy, and the disposable soma theory being the most widely accepted in the biological community. It is important to note that none of these theories are mutually exclusive, and all are likely to be important in some way or another. The one I am most partial to is antagonistic pleiotropy, which states that traits which are good for an animal when it is young are not always good for the animal when it is older. And since natural selection is more powerful in younger animals (as discussed above), this can lead to the accumulation of traits which favor the phenotype that we call "aging" late in an animal's life. An example of this would be a gene/series of genes that accelerates the rate at which an animal grows. You can imagine that this would lead to a bigger animal, more likely to ward off predators and hence more evolutionarily fit than any smaller member of its species. As such, it is likely to be selected for. However, this gene/series of genes may have enabled faster growth by removing control of the cell cycle, allowing for faster cellular proliferation. It is not too hard to imagine that this would increase an animal's predisposition to cancer (an age-related disease). While cancer is obviously bad, most animals don't develop cancer until late in life, after they have already reproduced. So natural selection doesn't have as much of an opportunity to select against the "cancer-causing" aspect of this trait. It is easy to conceive of other "evolutionary traps" that would result in other aging phenotypes – heart problems, graying hair, etc.
What is aging, at the molecular level?
An awesome review on the topic proposes several major hallmarks of aging: genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient-sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication. There isn’t time to go into each of these in detail (although feel free to discuss them below!), but in their own way, each of these hallmarks has been experimentally proven to drive aging phenotypes in multiple model organisms. Understanding how these pathways work, and how they are perturbed over time, is critical for anyone attempting to design interventions that will slow, stop or reverse the aging process.
SENS, a popular but somewhat controversial research group, advocates for a similar list of aging factors: cell loss and cell atrophy, cancerous cells, mitochondrial mutations, death-resistant cells, extracellular matrix stiffening, extracellular aggregates, and intracellular aggregates. The organization even offers a plan for attacking and surmounting these causes of aging. The SENS organization is popular in the general public, but a little controversial in the scientific community for failing to produce meaningful results. We can talk about why in more detail if anyone is interested.
Is it possible to slow, stop or reverse the aging process?
While no single intervention has made it into the clinic with the express purpose of ameliorating the aging process, a number of preclinical animal models give hope to the idea that it may be possible to design therapeutic strategies that can attenuate the aging process, or at least specific components of what we call the aging phenotype. Here I will review some of the genetic and pharmacological approaches employed by researchers to extend animal lifespans.
The most robust method for extending lifespan and delaying aging experimentally is dietary restriction. In almost all animal models tested – yeast, fruit fly, nematode, mouse, rat, and even monkey – some form of dietary restriction (cutting calories, or certain components of the diet) improves maximum and mean lifespans and delays the onset of multiple age-related pathologies. While this is certainly the most robust mechanism in the literature for extending lifespan, it is worth noting that the magnitude of the effect varies fairly dramatically across species (worms can experience a 200% increase in lifespan, whereas mice typically experience at best a 40% increase), and even within species, depending on experimental conditions (some strains of mice appear not to benefit from caloric restriction, while in other strains only one sex benefits; one group of researchers reports that monkeys benefit from caloric restriction, while another group reports no benefit; and so on).
A number of genetic manipulations have also been reported to extend lifespan in mice. For example, overexpression of catalase, Klotho, and Sirt6, or down-regulation of Foxo, growth hormone, and TOR signaling tend to offer relatively minor (10-30%) increases in longevity. These genetic modifications typically also delay multiple aging phenotypes as well.
These dietary and genetic studies have informed several pharmacological endeavors. If overexpression or down regulation of a gene results in lifespan modulation, researchers reasoned that it may be possible to design drugs that can modulate these signaling pathways in the same direction. One longevity drug candidate that has exhibited preclinical success is rapamycin. Rapamycin inhibits the mTOR pathway. Multiple studies in mouse models have demonstrated that rapamycin can extend murine lifespan by upwards of 15% and simultaneously delay the onset of multiple age-related pathologies. Interestingly, rapamycin is actually used in humans as an immunosuppressant to promote renal engraftment after transplantation. When researchers looked at rapamycin-treated cohorts, they found fewer age-related pathologies (relative to patients who received different immunosuppressants), such as lower rates of cancer. While it is unlikely that rapamycin is ideal for life-long use, due to side effects, researchers are working on developing compounds that mimic rapamycin without any of the long-term side effects.
More questions I think are interesting:
Given the immensity of the task, is it possible to run a clinical trial for anti-aging drugs? What would the endpoints be? What would the biomarkers be? How would you pay for it?
How do some animals (such as hydra and certain jellyfish) seem to live forever? How do some animals (such as naked mole rat) never get cancer?
Parabiosis. Not a question, per se, but dang that stuff is cool.
What can centenarians teach us about living really long lives?
What are the implications of an increasingly aging human population? What are the ethical concerns related to technologies that extend healthy lifespan? What about transhumanism?
What is the future of anti-aging interventions? Stem cell therapy? Small molecule drugs? Living healthy? Downloading our consciousness onto computers?
What role does the immune system play in aging? Does it go a bit haywire? Does it stop working? Or maybe a bit of both?
Final thoughts
There is a parable, The Fable of the Dragon-Tyrant, that several prominent researchers use when discussing the urgency of aging research. It really is a beautiful story, and I hope you take the time to read it. Personally, I've found aging to be a fascinating field of research. It is an endlessly interesting biological question – there is so much variability in aging. Across species we find animals that live only fleeting lifespans (such as the fruit fly or the shrew), animals that have found ways to fight aging (naked mole rats, blind mole rats, humans), and even, occasionally, animals that appear immune to aging (hydra). At the same time, aging is also an urgent topic of medical research. The developed world has a rapidly aging population, and we are woefully underprepared for addressing the medical needs of this demographic. To put a number on it – the average person in the U.S. lives about 27,500 days. Finding ways to extend that number, especially if we can add "healthy days lived" to the tally, is a goal that I think almost everyone can rally around.
I hope you enjoyed this primer on the biology of aging. Feel free to ask questions or start a discussion below!
At the American Chemical Society Meeting in March, I was interviewed about the Science AMA Series. This is the video the ACS staffers put together, I thought people would be interested in seeing it.
Link to the video on youtube:
This is before a webinar I am giving covering science discussions on reddit:
http://www.acs.org/content/acs/en/events/upcoming-acs-webinars/digital-media.html
(I'll post a link to the webinar on the day of so that people can easily find it.)
Hopefully redditors find this interesting and informative as to our motivations and values. (Spoiler: we're pro-science!)
Nate
In light of the recent article in Nature regarding the 3.3 Million year old stone tools found in Africa and the very long comment thread in this subreddit, a discussion of archaeological methods seems timely.
African Fossils.org has put together a really nice site which has movable 3D photos of the artifacts.
Some of the most common questions in the comment thread included:
- "Those look like rocks!"
- "How can we tell they are actually tools?"
- "How can they tell how old the tools are?"
Distinguishing Artifacts from Ecofacts
Some of the work co-authors and I have done was cited in the Nature paper. Building on previous work, we looked at methods to distinguish human-manufactured stone tools (artifacts) from natural rocks (called ecofacts). This is especially important at sites where the lithic technology is rudimentary, as in the Kenyan example cited above or at several potentially pre-Clovis sites in North America.
Our technique was to use several attributes of the tools that are considered to appear more commonly on artifacts than on ecofacts because they signify intentionality rather than accidental creation.
These included:
- flakes of a similar size
- flakes oriented and overlapping, forming an edge
- bulbs of percussion, indicating strong short-term force rather than long-term pressure
- platform preparation
- small flakes along the edge showing a flintknapper preparing an edge
- stone type selection
- use wear on edges, among others
We tested known artifact samples, known ecofact samples and the test sample and compared the frequency of these attributes to determine if the test samples were more similar to artifacts or ecofacts.
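In spirit, the comparison works something like the sketch below. The attribute names, frequencies, and nearest-baseline scoring are invented for illustration; the actual study used real reference samples and a more careful statistical comparison:

```python
# Fraction of pieces in each reference sample showing each attribute
# (all numbers invented for this illustration)
ARTIFACT_BASELINE = {"similar_flake_size": 0.8, "overlapping_edge_flakes": 0.7,
                     "percussion_bulbs": 0.6, "edge_use_wear": 0.5}
ECOFACT_BASELINE  = {"similar_flake_size": 0.2, "overlapping_edge_flakes": 0.1,
                     "percussion_bulbs": 0.1, "edge_use_wear": 0.1}

def distance(sample, baseline):
    """Total absolute difference in attribute frequencies."""
    return sum(abs(sample[k] - baseline[k]) for k in baseline)

# A hypothetical test sample: which reference set does it sit closer to?
test_sample = {"similar_flake_size": 0.7, "overlapping_edge_flakes": 0.6,
               "percussion_bulbs": 0.5, "edge_use_wear": 0.4}

verdict = ("artifact" if distance(test_sample, ARTIFACT_BASELINE)
           < distance(test_sample, ECOFACT_BASELINE) else "ecofact")
```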
This method provides a robust way to differentiate stone tools from naturally occurring rocks.
Other Points for Discussion
The press received by the Nature article provides a unique teaching opportunity for archaeologists to discuss their methods with each other and to help laypeople better understand how we learn about prehistory.
Other topics derived from the Nature article could include:
- dating methods
- excavation methods
- geoarchaeology
- interpretive theory
I will answer anything I can but I hope other anthropologists in this subreddit will join in on the discussion.
Note: I have no direct affiliation with the work reported in Nature so will only be able to answer general questions about it.
Impelled in part by some of the dismissive comments I have seen on /r/science, I thought I would take the opportunity of the new Science Discussion format to wade into the question of whether psychology should be considered a ‘real’ science, but also more broadly about where psychology fits in and what it can tell us about science.
By way of introduction, I come from the Skinnerian tradition of studying the behaviour of animals based on consequences of behaviour (e.g., reinforcement). This tradition has a storied history of pushing for psychology to be a science. When I apply for funding, I do so through the Natural Sciences and Engineering Research Council of Canada – not through health or social sciences agencies. On the other hand, I also take the principles of behaviourism to study 'unobservable' cognitive phenomena in animals, including time perception and metacognition.
So… is psychology a science? Science is broadly defined as the study of the natural world based on facts learned through experiments or controlled observation. It depends on empirical evidence (observed data, not beliefs), control (that cause and effect can only be determined by minimizing extraneous variables), objective definitions (specific and quantifiable terms) and predictability (that data should be reproduced in similar situations in the future). Does psychological research fit these parameters?
There have been serious questions about whether psychology can produce objective definitions and reproducible conclusions, and whether the predominant statistical tests used in psychology properly test their claims. Of course, these are questions facing many modern scientific fields (think of evolution or string theory). So rather than asking whether psychology should be considered a science, it’s probably more constructive to ask what psychology still has to learn from the ‘hard’ sciences, and vice versa.
A few related sub-questions that are worth considering as part of this:
1. Is psychology a unitary discipline? The first thing that many freshman undergraduates (hopefully) learn is that there is much more to psychology than Freud. These can range from heavily ‘applied’ disciplines such as clinical, community, or industrial/organizational psychology, to basic science areas like personality psychology or cognitive neuroscience. The ostensible link between all of these is that psychology is the study of behaviour, even though in many cases the behaviour ends up being used to infer unseeable mechanisms proposed to underlie behaviour. Different areas of psychology will gravitate toward different methods (from direct measures of overt behaviours to indirect measures of covert behaviours like Likert scales or EEG) and scientific philosophies. The field is also littered with former philosophers, computer scientists, biologists, sociologists, etc. Different scholars, even in the same area, will often have very different approaches to answering psychological questions.
2. Does psychology provide information of value to other sciences? The functional question, really. Does psychology provide something of value? One of my big pet peeves as a student of animal behaviour is to look at papers in neuroscience, ecology, or medicine that have wonderful biological methods but shabby behavioural measures. You can’t infer anything about the brain, an organism’s function in its environment, or a drug’s effects if you are correlating it with behaviour and using an incorrect behavioural task. These are the sorts of scientific questions where researchers should be collaborating with psychologists. Psychological theories like reinforcement learning can directly inform fields like computing science (machine learning), and form whole subdomains like biopsychology and psychophysics. Likewise, social sciences have produced results that are important for directing money and effort for social programs.
3. Is ‘common sense’ science of value? Psychology in the media faces an issue that is less common in chemistry or physics; the public can generate their own assumptions and anecdotes about expected answers to many psychology questions. There are well-understood issues with believing something ‘obvious’ on face value, however. First, common sense can generate multiple answers to a question, and post-hoc reasoning simply makes the discovered answer the obvious one (referred to as hindsight bias). Second, ‘common sense’ does not necessarily mean ‘correct’, and it is always worth answering a question even if only to verify the common sense reasoning.
4. Can human scientists ever be objective about the human experience? This is a very difficult problem because of how subjective our general experience within the world can be. Being human influences the questions we ask, the way we collect data, and the way we interpret results. It’s likewise a problem in my field, where it is difficult to balance anthropocentrism (believing that humans have special significance as a species) and anthropomorphism (attributing human qualities to animals). A rat is neither a tiny human nor a ‘sub-human’, which makes it very difficult for a human to objectively answer a question like Does a rat have episodic memory, and how would we know if it did?
5. Does a field have to be 'scientific' to be valid? Some psychologists have pushed back against the century-old movement to make psychology more rigorously scientific by trying to return the field to its philosophical, humanistic roots. Examples include using qualitative, introspective processes to look at how individuals experience the world. After all, astrology is arguably more scientific than history, but few would claim it is more true. Is it necessary for psychology to be considered a science for it to produce important conclusions about behaviour?
Finally, in a lighthearted attempt to demonstrate the difficulty in ‘ranking’ the ‘hardness’ or ‘usefulness’ of scientific disciplines, I turn you to two relevant XKCDs: http://xkcd.com/1520/ https://xkcd.com/435/
Edit: There are some good questions in here related to building damage, culture, etc that I can't really answer, so I'm very much hoping that other experts will chime in.
This is a thread to discuss science related to the Nepal earthquakes. I will give a geophysical perspective, and it would be great if people from other fields, such as civil engineering or public health, could chime in with other info.
There have been dozens of earthquakes in Nepal in the past few weeks, the biggest being the magnitude 7.8 Gorkha earthquake and yesterday's magnitude 7.3 earthquake. Tectonically, this is a collision zone between the Indian subcontinent and Asia. This collision zone is unique, at least with our current configuration of tectonic plates, because the Indian plate is actually sliding under the Eurasian plate. When this happens at an ocean-land or ocean-ocean boundary it's called subduction. In a usual subduction zone, oceanic crust from one side of the collision sinks below crust on the other side, and goes deep into the mantle. However, in the India-Eurasia case, both sides are continental crust. Continental crust is less dense than oceanic crust and cannot sink. Therefore, the Indian plate diving underneath the Eurasian plate floats on top of the mantle, creating an area of double-thick crunched up crust, AKA the Tibetan plateau. The main sliding boundary between the Indian and Eurasian tectonic plates is called the Main Himalayan Thrust, and this is where we believe these two largest earthquakes occurred. These earthquakes are therefore "helping" India move further underneath Tibet.
The danger of this area has been long recognized within the geophysical community. A previous large earthquake occurred just to the southeast along the same thrust in 1934. Here is a historical map of shaking intensities from the 1934 quake with the location of the M 7.8 Gorkha quake indicated by the white box.
The Gorkha earthquake was recorded nicely with InSAR. InSAR is a satellite based method in which radar is swept over an area before and after an earthquake, and the two images are artificially "interfered" with each other, producing interference fringes that outline changes between the two time periods. The InSAR results can be viewed here and indicate that approximately 4-5 meters of slip occurred in an oblong patch.
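As a rough illustration of how fringes translate into ground motion: each full interference fringe corresponds to half a radar wavelength of line-of-sight displacement. The 5.6 cm wavelength below is an assumption for illustration (typical of C-band radar such as Sentinel-1), not necessarily the satellite used for this event.

```python
def los_displacement_m(n_fringes, wavelength_m=0.056):
    """Line-of-sight ground displacement implied by a fringe count:
    each full fringe is half a radar wavelength of motion.
    The 5.6 cm default is a typical C-band wavelength (an assumption
    for illustration, not necessarily the satellite used here)."""
    return n_fringes * wavelength_m / 2
```

Counting ten fringes across a deforming patch would thus imply roughly 0.28 m of line-of-sight motion. The 4-5 meters quoted above is slip on the fault at depth, which maps onto much smaller surface displacements, and the radar only sees the line-of-sight component of those.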
The recent M 7.3 earthquake could be considered an aftershock of the M 7.8, but it's a bit odd. The general rule of thumb is that the largest aftershock should be about 1 magnitude unit less than the main shock, or about a 6.8. We also expect this largest aftershock to occur relatively soon after the main shock, within a few days. So, this aftershock was both later than expected and larger than expected, but not unreasonably so. It appears that the general pattern over the last 2 weeks has been a southeastern migration of earthquakes, which could indicate some kind of aseismic, slower slip driving this migration (purely speculative on my part).
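The rule of thumb mentioned above is known as Båth's law. A quick back-of-the-envelope comparison, using the standard relation that each whole magnitude unit corresponds to about a 10^1.5 ≈ 31.6-fold difference in radiated energy:

```python
def expected_largest_aftershock(mainshock_mag, bath_delta=1.0):
    """Bath's-law rule of thumb: the largest aftershock is roughly
    one magnitude unit smaller than the main shock."""
    return mainshock_mag - bath_delta

def energy_ratio(m1, m2):
    """Ratio of radiated seismic energy between two magnitudes:
    each whole magnitude unit is a factor of 10**1.5 (~31.6)."""
    return 10 ** (1.5 * (m1 - m2))

expected = expected_largest_aftershock(7.8)  # ~6.8 by the rule of thumb
surprise = energy_ratio(7.3, expected)       # how much bigger the M 7.3 was
```

In energy terms, the observed M 7.3 released roughly five to six times more energy than the M 6.8 the rule of thumb predicts, which is why "larger than expected, but not unreasonably so" is a fair characterization.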
For more info, the following links may be helpful: Geology of the Himalayas USGS pages for the quakes: one two
In my excitement for the new Science Discussions, I posted this a couple days ago before I learned of the new discussion flair. I wanted to repost (and summarize) in order to take advantage of the proper format and audience.
The original post is here, and already has some great comments from /u/sythbio, /u/biocuriousgeorgie, and others.
In short, a gene-drive refers to a selfish genetic element that has the capacity to copy itself. CRISPR/Cas gene-drives have been shown to be extremely efficient and site specific; researchers have also demonstrated the ability for these drives to propagate through populations (including WT strains in yeast) with >95% inheritance.
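To see why >95% inheritance propagates so fast, here is a minimal deterministic allele-frequency model of a homing drive. It is a toy sketch under stated assumptions: random mating, no fitness cost, no resistance alleles, with the 95% homing efficiency taken from the figure quoted above.

```python
def drive_frequency(p, homing_efficiency=0.95, generations=10):
    """Toy deterministic model of a homing gene drive.

    Heterozygotes are converted toward drive homozygotes, so a carrier
    transmits the drive to (1 + e) / 2 of its offspring instead of 1/2.
    Assumes random mating; ignores fitness costs and resistance alleles."""
    history = [p]
    for _ in range(generations):
        q = 1 - p
        # Next-generation drive-allele frequency under random mating:
        # drive homozygotes (p^2) plus biased transmission from
        # heterozygotes (2pq * (1 + e) / 2).
        p = p * p + p * q * (1 + homing_efficiency)
        history.append(p)
    return history
```

Starting from a 1% release (p = 0.01), this model pushes the drive allele above 99% frequency within about ten generations, which is the qualitative behavior that makes the technology both powerful and worrying.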
The Church lab has only worked with these elements in yeast, but recently a group at UCSD has shown that these elements work very well in fruit flies. It’s easy to dismiss breakthrough discoveries that have only been validated in yeast and fruit flies, but in this case, all of the necessary components for this system have been demonstrated to work in mammalian hosts; that includes human cell lines, live monkeys, and human embryos. The simplicity and efficiency of this system is disturbingly amazing.
The Church lab has spearheaded this technology and debate, but they’ve been working in yeast for a number of technical and ethical reasons. They’ve also contributed to the public letter proposing a ban on human genome engineering until we really understand the implications and effects. Church interview. On the other hand, I’ve recently had a number of anecdotal conversations about the desperation of ecologists in recent times; invasive species all across the world are decimating habitats and native populations, and they have no good recourse. Gene-drives that specifically target invasive species could revolutionize ecological management and save countless native species from extinction. Also, mosquitos. (see links)
Some excellent followup questions are (courtesy of /u/sythbio):
- Although both labs (Church and UCSD) demonstrated high drive efficiency at around 97-99%, and the Church lab demonstrated high sequence fidelity of the drive and an adjacent load gene, I would be interested to analyze fidelity (of the drive, the load, and the target sites) over many generations. Can anyone comment on the natural mutation rate of natural selfish DNA elements? How do they maintain their fidelity (DNA sequence as well as functional fidelity, if it can be maintained with sequence degeneracy)? Would we expect Cas9-based gene drives to be any different?
Can anyone with experience speak to whether, in the context of ecological bioengineering, the documented low off-target rates for CRISPR insertion are even a concern?
On a cursory read of the Church gene drive manuscript, I did not see any analysis of off-target effects. Did I miss this, or does anyone know if off-target mutations/insertions occurred in the Church or UCSD work, or if this was even assessed?
Would any experts be willing to comment on the Chinese human embryo gene drive effort? I work with Cas9, so I'm not asking about the technical details--I would like to know others' opinions with respect to experimental design, and whether the research (coming from a low impact journal) was performed rigorously enough to avoid the problems that they discovered in their research, like low HDR efficiency, off-target cleavage, and a homologous gene acting as a repair donor. In other words, does anybody think that the problems they experienced were due to poor experimental design and execution, or are these problems expected to be characteristic of Cas9-based gene drives in general?
Relevant reading:
Link
more link!
even more interesting link
ok, enough church lab links
non-US human embryo modification
EDIT: Link formatting
Today we announce a new feature in /r/science, Science Discussions. These are text posts made by verified users about issues relevant to the scientific community.
The basic idea is that our practicing scientists will post a text post describing an issue or topic to open a discussion with /r/science. Users may then post comments to enter the conversation, either to add information or ask a question to better understand the issue, which may be new to them. Knowledgeable users may chime in to add more depth of information, or a different point of view.
This is, however, not a place for political grandstanding or flame wars, so the discussion will be moderated; be on your best behavior. If you can't disagree without being disagreeable, it's best not to comment at all.
That being said, we hope you enjoy quality discussions led by experienced scientists about science-related issues of the day.
Thanks for reading /r/science, and happy redditing!
Hi Reddit!
We are Tom Baden and Andre Maia Chagas, and we are neuroscience researchers at the Centre for Integrative Neuroscience (CIN) at the University of Tübingen, Germany. We are also part of TReND in Africa, a scientist-run NGO aimed at fostering science education and research on the African continent. We are active in the Maker-Movement, where we aim to promote the use of open source software and hardware approaches in research and education. We recently published a community page in PLOS Biology on the use of consumer-oriented 3-D printing and microcontrollers for building sophisticated yet low-cost laboratory equipment, or “Open Labware”. We argue that today it is possible to establish a fully operational “home-factory” for well below 1,000 USD. This opens up new ground for scientists, educators, and hobbyists outside the traditional scientific establishment to make real contributions to the advancement of science tools and science in general, while also allowing grant money to be used more effectively at financially better-established institutions. We actively promote these ideas and tools at training courses at universities across Africa, while our co-authors and colleagues from the US-based Backyard Brains run similar activities across Latin America.
We will be answering your questions at 1pm EDT (10 am PDT, 6 pm UTC). Ask us anything!
Don’t forget to follow us (TReND) on facebook and twitter! (Andre’s twitter here) Further reading: Open Source lab – by Joshua M Pearce