Recent Discussion

Anecdotes can be weak evidence

We all know that a lot of people overvalue their N=1 self-study: “I started eating only meat and my arthritis was cured!” Unfortunately, the weakness of this data is often attributed to the fact that it is an anecdote, and so anecdotes are wrongly vilified as “the bottom of the evidential hierarchy.” In the case of a diet change, the remission of arthritis could be due to any number of things: the passage of time (perhaps it would have healed on its own, diet change or not), a reduction in stress leading to lower inflammation, a higher pain tolerance, or even the person lying.

Anecdotes can be strong evidence

Anecdotes can actually be incredibly strong evidence, and Bayes' theorem tells...
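As a rough sketch of the kind of Bayesian update the post is gesturing at (my own illustration with made-up numbers, not the author's calculation): a single observation can shift the odds a lot when it is much more likely under the hypothesis than under every alternative explanation.

```python
# Sketch of a single-anecdote Bayesian update; all numbers are illustrative.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """P(hypothesis | observation) via Bayes' theorem."""
    num = p_obs_if_true * prior
    return num / (num + p_obs_if_false * (1 - prior))

# Weak anecdote: remission is fairly likely even if the diet does nothing
# (time, stress reduction, pain tolerance, dishonesty...), so the update is small.
print(posterior(prior=0.05, p_obs_if_true=0.6, p_obs_if_false=0.5))   # ~0.06

# Strong anecdote: the outcome would be very surprising under every alternative
# explanation, so even one observed case moves the estimate substantially.
print(posterior(prior=0.05, p_obs_if_true=0.6, p_obs_if_false=0.02))  # ~0.61
```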

5nathan980009hGood post! Spencer Greenberg has a post with similar thoughts on this: https://www.spencergreenberg.com/2021/05/is-learning-from-just-one-data-point-possible/

Wow, that essay explains strong anecdotes a lot better than I did. I knew about the low-variance aspect, but his third point and onwards made things clearer even for me. Thanks for the link!

Cross-posted from Cold Takes


I'm interested in the topic of ideal governance: what kind of governance system should you set up, if you're starting from scratch and can do it however you want?

Here "you" could be a company, a nonprofit, an informal association, or a country. And "governance system" means a Constitution, charter, and/or bylaws answering questions like: "Who has the authority to make decisions (Congress, board of directors, etc.), and how are they selected, and what rules do they have to follow, and what's the process for changing those rules?"

I think this is a very different topic from something like "How does the US's Presidential system compare to the Parliamentary systems common in Europe?" The...

6Larks12hOne issue that I've rarely seen addressed directly is changing population sizes. Historically, a lot of systems (e.g. one-man-one-vote democracy) have relied on the fact that adding voters is slow and expensive. But with reduced travel costs, artificial wombs, and eventually the possibility of digitally replicating people an arbitrary number of times, this assumption could cease to hold. At that point, dividing up rights on a per capita basis looks more like an invitation to abuse. This issue has historically been addressed by corporations through allocating votes proportionally to shares, not people, or by coins through proof-of-[work/stake]. In the future it could be as easy to multiply people as it is to multiply legal entities, email addresses or wallets.
2Charles He4hCan you explain how this viewpoint is substantively different from, say, the Greek concept of "Aristocracy [https://en.wikipedia.org/wiki/Aristocracy]", which was seen as highly positive by the classical Greeks? The point of my question is to understand the net contribution of this idea (other than the mechanics of the additional step of reducing vote weights to something short of zero, and putting this all in a spreadsheet). It also suggests we can just examine the related literature, which should be pretty large?

Hmm, I'm not sure I quite have the intuition for what 'Aristocracy' means in your link - the Greek definition seems to differ from the Hobbesian one, for example.

But I think the answer is that I am outlining a problem, and there are many different potential solutions. To use the crypto example, both proof-of-stake and proof-of-work could be valid solutions, even though they are quite distinct. So while Aristocracy might perhaps be one solution, I'm not sure it would be the only one, unless defined very broadly.
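To make the vote-dilution worry in this thread concrete, here is a minimal sketch (my own illustration; the scenario, names, and numbers are made up) of how cheap identity replication breaks per-capita voting but leaves share- or stake-weighted voting unaffected:

```python
# Toy comparison: one-person-one-vote vs stake-weighted voting when one actor
# can cheaply replicate identities. All names and numbers are illustrative.

from collections import Counter

# Honest world: Alice controls 40 units of stake, Bob controls 60.
# Alice then splits her identity into 40 copies holding 1 unit of stake each.
sybil_votes = [{"id": f"alice_{i}", "stake": 1, "choice": "A"} for i in range(40)]
electorate = [{"id": "bob", "stake": 60, "choice": "B"}] + sybil_votes

per_capita = Counter(v["choice"] for v in electorate)   # A: 40 heads, B: 1 head
stake_weighted = Counter()
for v in electorate:
    stake_weighted[v["choice"]] += v["stake"]           # A: 40 stake, B: 60 stake

print(per_capita)       # replication lets "A" win on a per-head count
print(stake_weighted)   # the stake-weighted tally is unchanged by replication
```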

Summary

Using disability-adjusted life years (DALYs) as a unit of account rather than tons of CO2 could be an effective way to fund health care in developing countries.

I am looking for support and advice from the EA Community on:

  • Use Case validation:
    • What are the challenges with funding health care currently?
    • Does this proposal effectively develop rigorous impact?
    • Have you funded a healthcare intervention before? Would you be willing to have a phone call to share how you chose who to fund?
  • Connections:
    • Do you know anyone else working on this problem we should talk to?

Rationale for HealthCredit

EA has created a great reputation for funding the highest-impact work with rigorous evidence. Because of this, funding is concentrated in established interventions with rigorous evidence of impact. This project would make it easier for a wider variety of new health...

I see this as a big step towards using value-aligned markets to address inefficiencies in charitable giving, and a much-needed proof of concept for tokenomics in EA. The proof of concept alone seems worth the attention of domain experts in health care funding. I am also curious how this could be extended to applications addressing food insecurity.
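As a minimal sketch of the DALY-as-unit-of-account idea from the summary above, by analogy with carbon credits denominated in tons of CO2 (the issuance rule and figures below are my own assumptions, not the HealthCredit design):

```python
# Toy model: credits denominated in DALYs averted, analogous to carbon credits
# denominated in tons of CO2 avoided. All figures are illustrative assumptions.

def credits_for_funding(funding_usd: float, usd_per_daly_averted: float) -> float:
    """Issue one credit per DALY the funded intervention is estimated to avert."""
    return funding_usd / usd_per_daly_averted

# e.g. $100,000 to a hypothetical intervention estimated at $50 per DALY averted
print(credits_for_funding(100_000, 50))  # 2000.0 credits (DALYs averted)
```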

2Matt Brooks6hCongrats on winning the hackathon! Very impressive! I'm excited to see how this project progresses; this seems like a great opportunity to improve the traditional funding and nonprofit sector without taking huge, crazy leaps.

Right as I was about to write this post, I saw Scott Alexander had made a post discussing existential risk vs. longtermism. Luckily Scott's post didn't tackle this question, so I can still write this post!

Disclaimer: I'm relatively new to Effective Altruism

Longtermism is pretty obvious to me: things that have long-term impact are much larger in effect than things with short-term impact, under utilitarianism or whatever other ethical system you adhere to.

I have never really understood existential risk, though. I still think studying existential risk factors is important, but only because basically every existential risk is also a major s-risk factor. I may not be using these terms exactly right based on this post; I view existential risk as "permanently limiting or completely extinguishing intelligent life", while...

7Linch3hHi, welcome to the forum. You raise some interesting points. Some quick notes/counterpoints:

1. Not all existential risk is extinction risk.
   • Existential risk doesn't have an extremely clean definition, but in the extinction/doom/non-utopia [https://forum.effectivealtruism.org/posts/myp9Y9qJnpEEWhJF9/shortform?commentId=E7383cdAjeKBub3Ks] ontology, most longtermist EAs' intuitive conception of "existential risk" is closer to risk of "doom" than of "extinction".
2. Nuclear war isn't a direct existential risk, but it is an existential risk factor.
   • The world could be made scarier after a large-scale nuclear war, and thus less hospitable to altruistic values (plus other desiderata).
3. AI may or may not kill us all. But this point is academic and only mildly important, because if unaligned AI takes over, we (humanity and our counterfactual descendants) have lost control of the future.
4. Almost all moral value in the future is in the tails (extremely good and extremely bad outcomes).
   • Those outcomes likely require being optimized for, and it seems likely that our spiritual descendants will optimize more heavily for good stuff than for bad stuff.
   • Bad stuff might happen incidentally (historical analogues include factory farming and slavery), but since it isn't being directly optimized for, it will be a small fraction of the badness of maximally bad outcomes.

Thank you for the response!

Yeah, I think my biggest problem is with (4), something that I probably should have expressed more in the post.

It's true that humans are in theory trying to optimize for good outcomes, and this is a reason to expect utility to diverge to infinity. However, there are, in my view, equally good reasons for utility to diverge to negative infinity: the world is not designed for humans. We are inherently fragile creatures, suited only to live in a world with a specific temperature, air composition, etc. There are a lot... (read more)

1James_Banks4hExistential risk might be worth talking about because of normative uncertainty. Not all EAs are necessarily hedonists, and perhaps the ones who are shouldn't be, for reasons to be discovered later. So, if we don't know what "value" is, or, as a movement, EA doesn't "know" what "value" is, a priori, we might want to keep our options open, and if everyone is dead, then we can't figure out what "value" really is or ought to be.

Summary

The Centre for Effective Altruism (CEA) wants to explore how to make Community Building a more attractive career path to pursue long term, especially with regards to the role of CEA community building grantee (CBG). I’m interested in this question, and have done some initial analysis on it. So far I have looked at previous work on this topic, conducted exit interviews with former community builders, done a survey with current community builders and written a short report. (Note that my analysis mainly builds on input from organizers in city and national groups, not university groups.)

In this post I highlight some of the key findings, hoping to induce discussion and get input for further work on the topic.

There is good reason to believe that community building can...

16Charles He10hThe Campus Specialist program was discontinued? The one announced ~4 months ago? This seemed like an important thing. (It seems like there are other ways to ask about this. I am biased towards making a public comment, because it seems like good practice.[1])

[1] The alternative is to ping people or get on a Zoom call. But this is demanding of others' time, especially since these contacts are sometimes not seen as entirely asynchronous. You might need to ping multiple people, or otherwise babysit this issue by successively contacting people, and that isn't worth it for many people. When programs are cancelled like this, it's often for complicated reasons. While getting a personal account is useful (but costly), it's harder to share with others. It seems better to create norms that encourage announcements like this on the EA Forum. There is a good chance the parent comment is wrong/noise, and this public comment should fix that.
1Vilhelm Skoglund11hThis is an interesting idea! I can see some practical / legal issues with having an organization with a few hired people in many different countries. But it should definitely work for the US and UK, where many community builders are based. Also, it should work with "regional hubs" in other locations. And even where one cannot technically be hired, having a joint back office for many things just seems robustly good. Maybe EA Nordics can lead the way with some experiments here!

As a really quick thought, I was just chatting with an aspiring community builder and we thought that (executive) director of community or something similar sounding could be worth considering. It might be worth looking at the tech community or similar to see their norms.

Open Philanthropy solicited reviews of my draft report “Is power-seeking AI an existential risk?” from various sources. Where the reviewers allowed us to make their comments public in this format, links to these comments are below, along with some responses from me in blue. 

  1. Leopold Aschenbrenner
  2. Ben Garfinkel
  3. Daniel Kokotajlo
  4. Neel Nanda
  5. Nate Soares
  6. Christian Tarsney
  7. David Thorstad
  8. David Wallace
  9. Anonymous 1 (software engineer at AI research team)
  10. Anonymous 2 (academic computer scientist)

The table below (spreadsheet link here) summarizes each reviewer’s probabilities and key objections.

Screenshot of linked summary spreadsheet

An academic economist focused on AI also provided a review, but they declined to make it public in this format.

FYI: Cell A7 in the spreadsheet says "Tarnsey" instead of "Tarsney"

Summary

I think it makes sense for Effective Altruists to pursue prioritization research to figure out how best to improve the wisdom and intelligence[1] of humanity. I describe endeavors that would optimize for longtermism, though similar research efforts could make sense for other worldviews.

The Basic Argument

For those interested in increasing humanity’s long-term wisdom and intelligence[1], several wildly different types of interventions are on the table. For example, we could improve at teaching rationality, or we could make progress on online education. We could build forecasting systems and data platforms. We might even consider something more radical, like brain-computer interfaces or highly advanced pre-AGI AI systems.

These interventions share many of the same benefits. If we figure out ways to remove people’s cognitive biases, causing them to make better...

Sounds like very interesting work! As a Frenchman, it's encouraging to see this uptake in another "Latin" European country. I think this analytic/critical thinking culture is also underdeveloped in France. I'm curious: in your project, do you make connections to the long tradition of (mostly Continental?) philosophical work in Italy? Have you encountered any resistance to the typically Anglo-Saxon "vibe" of these ideas? In France, it's not uncommon to dismiss some intellectual/political ideas (e.g., intersectionality) as "imported from America" and therefore irrelevant.

It seems to me that the effective altruism community has a tendency to overemphasize smarts and to underemphasize other important traits. (Some related remarks on this forum are found here, here, and here.) Yes, smarts do matter greatly, and IQ tests are indeed predictive of various outcomes and achievements. But something can be both important and overemphasized at the same time.

By analogy, vitamin C is no doubt necessary for our health, yet to focus on vitamin C to such an extent that one neglects most other vitamins can risk deficiencies of those other vitamins. A focus on smarts to the exclusion of other important traits and capacities may likewise lead to “deficiencies” along those other dimensions. 

To clarify, my claim here is not that anyone holds the cartoonish...

Great post, thanks for sharing! I think you might find Igor Grossmann's work on the psychology of wisdom particularly interesting (https://igorgrossmann.com/projects/wisdom/), if you haven't already been exposed to it :)

15Linch6hHere are my own attempts to answer this:

Qualitatively, I think the appropriate claim (from both my understanding of the intelligence–work performance literature and some other literature on related topics, plus personal impressions/anecdotes/intuitions) goes...

Quantitatively, my current best estimate is that the correlation between intelligence and impact* among self-identified highly-engaged EAs is ~0.55** (which explains ~30% of variance). My guess is that we do not have substantial data to do better than ~0.7 (~50% of variance explained).

I don't know whether other EAs agree with me here. My current guess is that numerically sensitive ones probably have numbers that aren't too far off (maybe slightly lower?), while people who are less numerically/statistically sensitive will initially claim correlations that are higher. However, this (if true) would likely be a general bias, rather than an intelligence-specific bias. I would further predict that EAs (at least ones who haven't read this comment) will systematically overestimate the importance of other predictors as well, across a wide range of fields.

I think these numbers may seem pretty low compared to our intuitions for how important smarts are. I don't know how to reconcile these intuitions exactly, except to again note that there are many other fields where intuitions dramatically overestimate correlations relative to reality.

*Here impact is operationalized loosely as "on a log-scale, what prediction-evaluation setups would say about someone's past impact five years from now."

**The precision of these numbers does not imply confidence.
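For readers less used to translating between correlations and variance explained, the parentheticals above follow from squaring the correlation coefficient; a quick check:

```python
# Variance explained by a linear predictor is the square of the correlation (r^2).
for r in (0.55, 0.7):
    print(f"r = {r}: about {r ** 2:.0%} of variance explained")
# r = 0.55: about 30% of variance explained
# r = 0.7: about 49% of variance explained
```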
2samuel6hI've been thinking about this lately, especially since I've started to apply to EA-specific opportunities. It does seem like EA orgs use intelligence as a main filter for hiring, which makes sense given the work (and is far better than plain old credentialism), but I sometimes wonder if they're filtering out valuable candidates whose strengths are cleverness, empathy, or doggedness rather than raw IQ. Most EA organizations are small, so I expect this will change as the community scales and becomes more inclusive of the full spectrum of skillsets. Note that this is a perspective from the outside looking in and is completely anecdotal. I could be mistaken.

Next week for The 80,000 Hours Podcast I'm again interviewing Will MacAskill. The hats he's wearing these days are:

• Author of 'What We Owe The Future',
• Associate Professor in Philosophy at Oxford's Global Priorities Institute, and
• Director of the Forethought Foundation for Global Priorities Research

What should I ask him?

(Here's interview one and two.)

Great set of questions! 

I'm personally very interested in the question about educational interventions. 

1Answer by johnburidan3hIs the birthrate of Western countries a long-term risk, given that even immigrant populations and developing countries seem to have falling rates? And if so, what is it a risk of? What's the downside?
1Answer by Michael8h
1. Will MacAskill mentions that "What We Owe The Future" is somewhat complementary to "The Precipice". What can we expect to learn from "WWOTF" having previously read "The Precipice"?
2. How would Will go about estimating the discount rate for the future? We shouldn't discriminate against the future "just because", but we still need some estimate of a discount rate, because:
   a) there are reasons for applying a discount rate other than discrimination, e.g. the "possibility of extinction, expropriation, value drift, or changes in philanthropic opportunities" (see https://forum.effectivealtruism.org/posts/3QhcSxHTz2F7xxXdY/estimating-the-philanthropic-discount-rate#Significance_of_mis_estimating_the_discount_rate), and
   b) not applying a discount rate at all makes all current charity negligibly effective compared to working towards a better future, e.g. by virtue of the future containing a much, much greater number of moral agents for whom we can safeguard that future (people, animals, but perhaps also AIs/robots or some post-human or trans-human species). Not having any discount rate would completely de-prioritize all current charity, which is something a lot of EAs would not agree with. In other words: how do we divide our resources (time, attention, money, career, etc.) between short-term and long-term causes?
3. What are the possible criticisms that the book could receive, both from within and from outside the EA community?
4. To what extent will the book discuss value shift/drift? It seems an interesting topic, which also appears not to be discussed very extensively in other EA sources.
5. What comes next after "WWOTF"? If another book, what will it be about?
6. What is Will's stance on the war in Ukraine? How does it contribute to x-risks and s-risks, and how can it influence the future (incl. the deep future)? It appears to be o...
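To make question 2(b) concrete, here is a small sketch (my own illustration with made-up numbers) of how strongly the choice of discount rate shifts weight between present and far-future value:

```python
# Present value of a fixed benefit realized t years from now, at discount rate r:
#   PV = value / (1 + r) ** t
# With r = 0 the far future dominates; even a small positive r pulls most of the
# weight back toward the present. All numbers below are illustrative.

def present_value(value: float, years: float, rate: float) -> float:
    return value / (1 + rate) ** years

benefit = 1_000_000  # hypothetical benefit realized 500 years from now
for rate in (0.0, 0.001, 0.01, 0.05):
    print(rate, round(present_value(benefit, 500, rate), 2))
# r = 0.0   -> 1,000,000
# r = 0.001 -> ~607,000
# r = 0.01  -> ~6,900
# r = 0.05  -> ~0 (effectively nothing)
```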