We all know that a lot of people overvalue their N=1 self-experiments: “I started eating only meat and my arthritis was cured!” Unfortunately, the weakness of this evidence is often attributed to the mere fact that it is an anecdote, and so anecdotes are wrongly vilified as “the bottom of the evidential hierarchy.” In the case of a diet change, the remission of arthritis could be due to any number of factors: time (perhaps it would have healed on its own, diet change or not), a reduction in stress leading to lower inflammation, a higher pain tolerance, or perhaps the person even lied.
Anecdotes can actually be incredibly strong evidence, and Bayes' theorem tells...
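To make the Bayesian point concrete, here is a minimal sketch of how a single anecdote shifts a belief. All the probabilities are hypothetical placeholders, not estimates from the post; the point is only the mechanics of the likelihood ratio.

```python
# Hypothetical numbers: how much should one anecdote shift our belief
# that a carnivore diet relieves arthritis?
prior = 0.01                  # assumed prior P(diet works)
p_anecdote_if_works = 0.50    # assumed P(hearing this story | diet works)
p_anecdote_if_not = 0.05      # assumed P(story arises anyway: time, stress, lying)

# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio
likelihood_ratio = p_anecdote_if_works / p_anecdote_if_not
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))  # prints 0.092
```

With these made-up numbers, one anecdote moves the probability from 1% to about 9%: weak in absolute terms, but a roughly tenfold update, which is far from "the bottom of the evidential hierarchy."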
Cross-posted from Cold Takes
I'm interested in the topic of ideal governance: what kind of governance system should you set up, if you're starting from scratch and can do it however you want?
Here "you" could be a company, a nonprofit, an informal association, or a country. And "governance system" means a Constitution, charter, and/or bylaws answering questions like: "Who has the authority to make decisions (Congress, board of directors, etc.), and how are they selected, and what rules do they have to follow, and what's the process for changing those rules?"
I think this is a very different topic from something like "How does the US's Presidential system compare to the Parliamentary systems common in Europe?" The...
Hmm, I'm not sure I quite have the intuition for what 'Aristocracy' means in your link - it seems the Greek definition differs from the Hobbesian one, for example.
But I think the answer is that I am outlining a problem, and there are many different potential solutions. To use the crypto example, both proof-of-stake and proof-of-work could be valid solutions, even though they are quite distinct. So while Aristocracy might perhaps be one solution, I'm not sure it would be the only one, unless defined very broadly.
Using Life-Years (DALYs) as a unit of account rather than tons of CO2 could be an effective way to fund health care in developing countries.
I am looking for support and advice from the EA Community on:
EA has built a strong reputation for funding the highest-impact work backed by rigorous evidence. Because of this, funding is concentrated in established interventions with rigorously demonstrated impact. This project would make it easier for a wider variety of new health...
I see this as a big step towards using value aligned markets to address inefficiencies in charitable giving, and a much needed proof of concept for tokenomics in EA. The proof of concept seems to be alone worth the attention of domain experts in health care funding. I am also curious how this could be extended to applications addressing food insecurity.
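To illustrate the unit-of-account idea, here is a minimal sketch of pricing interventions in cost per DALY averted, the analogue of $/ton of CO2 in carbon markets. The intervention names and figures are purely hypothetical, chosen only to show the calculation.

```python
# Illustrative figures only — not real cost-effectiveness data.
interventions = {
    "bednets":   {"cost_usd": 100_000, "dalys_averted": 2_000},
    "deworming": {"cost_usd": 100_000, "dalys_averted": 1_250},
}

# Cost per DALY averted lets very different health interventions be
# compared (and funded) in a single unit, as tons of CO2 do for carbon.
for name, d in interventions.items():
    cost_per_daly = d["cost_usd"] / d["dalys_averted"]
    print(f"{name}: ${cost_per_daly:.2f} per DALY averted")
```

Under these assumed numbers, a funder (or a token backed by verified DALYs) could direct marginal dollars to whichever intervention currently averts a DALY most cheaply.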
Right as I was about to write this post, I saw Scott Alexander had made a post discussing existential risk vs. longtermism. Luckily Scott's post didn't tackle this question, so I can still write this post!
Disclaimer: I'm relatively new to Effective Altruism
Longtermism is pretty obvious to me: things with long-term impact have a much larger effect than things with short-term impact, under utilitarianism or whatever other ethical system you adhere to.
I have never really understood existential risk, though. I still think studying existential risk factors is important, but only because basically every existential risk is also a major s-risk factor. I may not be using these terms exactly right based on this post; I view existential risk as "permanently limiting or completely extinguishing intelligent life", while...
Thank you for the response!
Yeah I think I have the most problem with (4), something that I probably should have expressed more in the post.
It's true that humans are, in theory, trying to optimize for good outcomes, and this is a reason to expect utility to diverge to infinity. However, there are in my view equally good reasons to expect utility to diverge to negative infinity: namely, that the world is not designed for humans. We are inherently fragile creatures, suited only to living in a world with a specific temperature, air composition, etc. There are a lot... (read more)
The Centre for Effective Altruism (CEA) wants to explore how to make Community Building a more attractive career path to pursue long term, especially with regards to the role of CEA community building grantee (CBG). I’m interested in this question, and have done some initial analysis on it. So far I have looked at previous work on this topic, conducted exit interviews with former community builders, done a survey with current community builders and written a short report. (Note that my analysis mainly builds on input from organizers in city and national groups, not university groups.)
In this post I highlight some of the key findings, hoping to induce discussion and get input for further work on the topic.
There is good reason to believe that community building can...
As a really quick thought: I was just chatting with an aspiring community builder, and we thought that (executive) director of community, or something similar-sounding, could be worth considering. It might be worth looking at the tech community or similar fields to see their norms.
Open Philanthropy solicited reviews of my draft report “Is power-seeking AI an existential risk?” from various sources. Where the reviewers allowed us to make their comments public in this format, links to these comments are below, along with some responses from me in blue.
The table below (spreadsheet link here) summarizes each reviewer’s probabilities and key objections.
An academic economist focused on AI also provided a review, but they declined to make it public in this format.
FYI: Cell A7 in the spreadsheet says "Tarnsey" instead of "Tarsney"
I think it makes sense for Effective Altruists to pursue prioritization research to figure out how best to improve the wisdom and intelligence[1] of humanity. I describe endeavors that would optimize for longtermism, though similar research efforts could make sense for other worldviews.
For those interested in increasing humanity’s long-term wisdom and intelligence[1], several types of wildly different interventions are options on the table. For example, we could improve at teaching rationality, or we could make progress on online education. We could make forecasting systems and data platforms. We might even consider something more radical, like brain-computer interfaces or highly advanced pre-AGI AI systems.
These interventions share many of the same benefits. If we figure out ways to remove people’s cognitive biases, causing them to make better...
Sounds like very interesting work! As a Frenchman, it's encouraging to see this uptake in another "latin" European country. I think this analytic/critical thinking culture is also underdeveloped in France. I'm curious: in your project, do you make connections to the long tradition of (mostly Continental?) philosophical work in Italy? Have you encountered any resistance to the typically anglo-saxon "vibe" of these ideas? In France, it's not uncommon to dismiss some intellectual/political ideas (e.g., intersectionality) as "imported from America" and therefore irrelevant.
It seems to me that the effective altruism community has a tendency to overemphasize smarts and to underemphasize other important traits. (Some related remarks on this forum are found here, here, and here.) Yes, smarts do matter greatly, and IQ tests are indeed predictive of various outcomes and achievements. But something can be both important and overemphasized at the same time.
By analogy, vitamin C is no doubt necessary for our health, yet to focus on vitamin C to such an extent that one neglects most other vitamins can risk deficiencies of those other vitamins. A focus on smarts to the exclusion of other important traits and capacities may likewise lead to “deficiencies” along those other dimensions.
To clarify, my claim here is not that anyone holds the cartoonish...
Great post, thanks for sharing! I think you might find Igor Grossmann's work on the psychology of wisdom particularly interesting (https://igorgrossmann.com/projects/wisdom/), if you haven't already been exposed to it :)
Next week for The 80,000 Hours Podcast I'm again interviewing Will MacAskill. The hats he's wearing these days are:
• Author of 'What We Owe The Future'
• Associate Professor in Philosophy at Oxford's Global Priorities Institute, and
• Director of the Forethought Foundation for Global Priorities Research
What should I ask him?
Great set of questions!
I'm personally very interested in the question about educational interventions.
Wow, that essay explains strong anecdotes a lot better than I did. I knew about the low-variance aspect, but his third point and onwards made things clearer even for me. Thanks for the link!