Recent Discussion

Mailing list for the new UK Conservative Party group on China.

Will probably be worth signing up to if that's your area of interest.

https://chinaresearchgroup.substack.com/p/coming-soon

Please comment with any other places people could find mailing lists or good content for EA-related areas.

TL;DR:

  • It's sometimes reasonable to believe things based on heuristic arguments, but it's useful to be clear with yourself about when you believe things for heuristic reasons as opposed to having strong arguments that take you all the way to your conclusion.
  • A lot of the time, I think that when you hear a heuristic argument for something, you should be interested in converting this into the form of an argument which would take you all the way to the conclusion except that you haven't done a bunch of the steps--I think it's healthy to have a map of all the argumentative steps
... (Read more)

The common attitude was something like "we're utilitarians, and we want to do as much good as we can. EA has some interesting people and interesting ideas in it. However, it's not clear who we can trust; there's lots of fiery debate about cause prioritization, and we just don't at all know whether we should donate to AMF or the Humane League or MIRI. There are EA orgs like CEA, 80K, MIRI, GiveWell, but it's not clear which of those people we should trust, given that the things they say don't always make sense to us, and they have different enough bottom l

... (read more)
2 Max_Daniel 4h Thanks, I think this is a useful clarification. I'm actually not sure I even clearly distinguished these cases in my thinking when I wrote my previous comments, but I agree the thing you quoted is primarily relevant to when end-to-end stories will be externally validated. (By which I think you mean something like: they would lead to an 'objective' solution, e.g. a maths proof, if executed without major changes.) The extent to which we agree depends on what counts as an end-to-end story. For example, consider someone working on ML transparency claiming their research is valuable for AI alignment. My guess is:

  • If literally everything they can say when queried is "I don't know how transparency helps with AI alignment, I just saw the term in some list of relevant research directions", then we are both quite pessimistic about the value of that work.
  • If they say something like "I've made the deliberate decision not to focus on research for which I can fully argue it will be relevant to AI alignment right now. Instead, I just focus on understanding ML transparency as best as I can, because I think there are many scenarios in which understanding transparency will be beneficial", and then they say something showing they understand longtermist thought on AI risk, then I'm not necessarily pessimistic. I'd think they won't come up with their own research agenda in the next two years, but depending on the circumstances I might well be optimistic about that person's impact over their whole career, and I wouldn't necessarily recommend that they change their approach. I'm not sure what you'd think, but I think I initially read you as being pessimistic in such a case, and this was partly what I was reacting against.
  • If they give an end-to-end story for how their work fits within AI alignment, then all else equal I consider that a good sign. However, depending on the circumstances I might still think the best long-term str
2 Max_Daniel 5h Yes, agreed. Though anecdotally my impression is that Wiles is an exception, and that his strategy was seen as quite weird and unusual by his peers. I think I agree that in general there will almost always be a point at which it's optimal to switch to a more end-to-end strategy. In Wiles's case, I don't think his strategy would have worked if he had switched as an undergraduate, and I don't think it would have worked if he had lived 50 years earlier (because the conceptual foundations used in the proof had not been developed yet).

This can also be a back and forth. E.g. for Fermat's Last Theorem, perhaps number theorists were justified in taking a more end-to-end approach in the 19th century because there had been little effort using then-modern tools; and indeed, I think partly stimulated by attempts to prove FLT (and actually proving it in some special cases), they developed [https://en.wikipedia.org/wiki/Ernst_Kummer#Mathematics] some of the foundations of classical algebraic number theory. Maybe people had by then understood that the conjecture resisted direct proof attempts given then-current conceptual tools, and at that point it would have become more fruitful to spend more time on less direct approaches, though these could still be guided by heuristics like "it's useful to further develop the foundations of this area of maths / our understanding of this kind of mathematical object, because we know of a certain connection to FLT, even though we wouldn't know how exactly this could help in a proof of FLT". Then, perhaps in Wiles's time, it was time again for more end-to-end attempts, etc.

I'm not confident that this is a very accurate history of FLT, but I'm reasonably confident that the rough pattern applies to a lot of maths.

Remix of: Purchase Fuzzies and Utilons Separately

It can be tough as an EA to watch something urgent and important happen, while not seeing any relevant giving opportunity as effective as the ones you're already helping with. You may feel guilty for not helping the visible crisis, and also feel guilty if you helped the visible crisis at the expense of helping a larger problem.

Here's my personal way of dealing with it:

1. Figure out your anticipated EA donation amount for this year. (Take into account your financial circumstances - this is a weird year.) Leave that amount alone - don't... (Read more)

As a matter of pragmatic trade-offs and community health, I broadly agree with this. However, I also think it's important to point out that you[1] don't have to throw out all your EA principles when making "emotional" donating decisions. If it's necessary for your happiness to donate to cause area X, you can still try to make your donation to X as effective as possible, within your time constraints.

I suspect that the best way to do this is often to think about how narrow the cause area you're drawn to actually is. Would you feel bad if you donated to a

... (read more)
5 warrenjordan 8h I'm new to EA and this was a great reminder. I've had an on-and-off internal conflict about donating to EA vs non-EA cause areas. From a personal finance framing, I have EA donations as one line item and "Random Acts of Kindness" as another line item in my monthly budget (e.g. ranging from paying for a friend's meal to donating to a non-EA cause area such as criminal justice reform).

Side note: What was your decision-making process for choosing to donate to Campaign Zero? I'm trying to assess where my donations would have the most impact [https://forum.effectivealtruism.org/posts/GuQcCnscFq2hfaegN/what-are-some-good-charities-to-donate-to-regarding-systemic]. I'm hesitant to donate to them compared to other organizations that Open Phil has vetted through their criminal justice reform grants [https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform], as well as their recent letter to interested donors in response to the protests [https://docs.google.com/document/d/1GGgEZ8ebFd6--C4wLeJV9XrX1OPPg40NL6F1QDo53Bs/edit].

Crossposting from the Effective Altruism community on Reddit. Thought it may be helpful to have a discussion here as well for those who don't frequent r/EffectiveAltruism.

For those who are thinking about how they can leverage their donations towards this cause area, where should we be donating to?

Bail funds are getting the most media attention right now, with the Minnesota Freedom Fund receiving $20M. Given that, I'm not sure there is still a funding need for bail funds compared to other, more neglected organizations in the same cause area. I'm also not sure how to compare... (Read more)

I think we're at such an early stage, with limited access to data, that my intuition is: run some experiments and monitor them closely, and look for 'meta' opportunities that multiply impact. Giving to ActBlue itself so it can scale up is a bet that it will facilitate much more than the tens of millions of dollars it has raised already, and an acknowledgment that better opportunities may arise in the near future (but will still be funded through that platform).

In terms of personal, counterfactual donations in addition to my 'normal' EA donatio... (read more)

16 warrenjordan 14h As a person still new to EA, I found it disheartening to see the downvotes. You can see in my post history that I rely on this community to be educated and engaged on EA, including how I can apply it to my life. After I saw the downvotes, it gave me the perception of exclusivity in this community. I'm glad I was made aware that there was a duplicate question, which I apologize for missing. Yet I'm now a little apprehensive about posting anything that doesn't seem to fit the bucket of EA cause areas.

The purpose of my post wasn't to give more attention to a non-EA cause. I want to apply the principles and concepts of EA so that I and others can make an informed, confident decision about how our dollars can make the greatest impact in this specific cause area. If this community is only receptive to and knowledgeable about EA cause areas, such that discussions around non-EA causes won't provide meaningful value, then please tell me so that I can engage in a different community.
1 willbradshaw 2h Just to point out, at the time of writing, that this question is now at 41 karma, which is pretty good. So whoever was downvoting it at the beginning appears to have been outvoted. :-) As I said in my other comment, I think this is a good question, well-phrased and thoughtful, and I'd be happy to see more like it on the Forum. Thank you for contributing here.

We know that a large number of people donate inefficiently, and that because of scope insensitivity and other thinking errors/biases/heuristics, some organisations do well by (intentionally) being inefficient. What counts is not necessarily the relative impact (the cost/impact ratio) but the absolute impact.

An example:

A transparent Organization 1, focused on improving its cost/impact ratio, uses honest advertising but offers low social-reward incentives, and is therefore able to raise $300K per year. Being very efficient, it saves one life-year* per $10.

That equals 30,000 life-years* saved.

Organization 2, by contrast, focuses on inc... (Read more)

Thanks for your extensive answer and insight. What you are saying makes sense, especially the points about charities competing for limited resources and the best charities being many times more cost-effective. The latter makes it incredibly difficult to make the example of Organisation 2 work. Considering that the top interventions are over ten times more cost-effective than the average, focusing on cost-effectiveness seems to be the best choice currently! (Imagining increasing the number of people donating / the amount of dollars donated by 900% is comp... (read more)
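
To make the arithmetic in the example above concrete, here is a minimal sketch in Python. Since the original post is truncated, Organization 2's figures are assumed: it is taken to be ten times less cost-effective than Organization 1 (roughly the top-vs-average gap mentioned in this comment), and we ask how much more it would need to raise to match Organization 1's absolute impact.

```python
# Minimal sketch of the absolute-impact arithmetic; Organization 2's
# figures are hypothetical, since the original example is truncated.

def life_years_saved(dollars_raised: float, dollars_per_life_year: float) -> float:
    """Absolute impact = funds raised / cost per life-year saved."""
    return dollars_raised / dollars_per_life_year

org1_raised = 300_000  # $300K raised per year via honest, low-incentive advertising
org1_cost = 10         # $10 per life-year (very efficient)
org1_impact = life_years_saved(org1_raised, org1_cost)  # 30,000 life-years

# Hypothetical Organization 2: ten times less cost-effective (roughly average).
org2_cost = 100        # $100 per life-year

# Funds Organization 2 would need to raise to match Organization 1's absolute impact:
org2_needed = org1_impact * org2_cost                 # $3,000,000
increase = (org2_needed - org1_raised) / org1_raised  # 9.0, i.e. a 900% increase

print(f"Org 1: {org1_impact:,.0f} life-years from ${org1_raised:,}")
print(f"Org 2 must raise ${org2_needed:,.0f} ({increase:.0%} more) to match that")
```

This reproduces the 900% figure in the comment: a charity that is ten times less cost-effective must raise ten times as much (a 900% increase) to deliver the same absolute impact.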

I have recently published a book on suffering-focused ethics. The following is a short description:

The reduction of suffering deserves special priority. Many ethical views support this claim, yet so far these have not been presented in a single place. Suffering-Focused Ethics provides the most comprehensive presentation of suffering-focused arguments and views to date, including a moral realist case for minimizing extreme suffering. The book then explores the all-important issue of how we can best reduce suffering in practice, and outlines a coherent and pragmatic path forward.

An invitation fo

... (Read more)

Thanks, Mike!

Great questions. Let me see whether I can do them justice.

If you could change peoples' minds on one thing, what would it be? I.e. what do you find the most frustrating/pernicious/widespread mistake on this topic?

Three important things come to mind:

1. There seems to be a common misconception that if you hold a suffering-focused view, then you will, or at least should, endorse forms of violence that seem abhorrent to common sense. For example, that you should consider it good when people get killed (because it prevents future suffering fo... (read more)

By longtermism I mean “the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future.”

I want to clarify my thoughts around longtermism as an idea - and to understand better why some aspects of how it is used within EA make me uncomfortable despite my general support of the idea.

I'm doing a literature search, but because this is primarily an EA concept that I know from within EA, I'm mostly familiar with work by advocates of this position (e.g. Nick Beckstead). I'd ... (Read more)

Thanks Pablo and Ben. I already have tags below each argument indicating what I think it is arguing against. I do not plan on doing two separate posts, as some arguments are both against longtermism and against the longtermist case for working to reduce existential risk. Each argument and its response are presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.

1 AlasdairGives 16h That sounds fantastic. I'd love to read the draft once it is circulated for feedback.

The fourth Workshop on Mechanism Design for Social Good is taking place this August. 

In addition to requesting papers and demonstrations, they are requesting "problem pitches", where people (say, those working in policy or at NGOs) can submit a problem whose solution may involve mechanism design (a subfield of game theory). If accepted, a pitch may attract the interest of academics working on these topics.

This might be a good opportunity to pitch some problems related to EA, perhaps concerning:

  1. Donor coordination.
  2. Impact prize.
  3. Moral trade.

I'm sure that there are many more concrete examples within specific o

... (Read more)