Campbell Ramble


Why eating meat and worrying about AGI is morally inconsistent.

Or how I learned to stop worrying and start making smarter machines.

Alexander Campbell's avatar
Alexander Campbell
Oct 24, 2022


Take a minute to ask yourself, “why is it moral for me to eat another animal?”

Most arguments for eating meat (or wearing fur, etc.) boil down to: 'we are smarter than them, thus humans ought to carry radically more moral weight in society than animals.'



This argument also extends to why we don't grant children, or indeed all humans, full legal rights.


It stands to reason that any AGI capable of catching up to human intelligence will likely surpass it.

Meaning, if there is AGI, it will be smarter than us.


But recall, if you eat meat, by your own moral framework, smarter things deserve more rights!

If it’s ok to eat rabbits because they are dumber than you, on what basis do you deserve more moral consideration than a robot with 100x your intellectual capacity?

Thus - by most people’s own moral calculus - any AGI with more intellectual capacity than humans actually deserves more moral weight than the humans who make it.


This is where @willmacaskill's arguments about the future come in.

By his logic, if you care about all humans alive today, you ought also to care about all the humans ‘yet to be born.’

Utility in the Future ~ Utility Today

Combining this moral framework of “longtermism” with a) the likely path ahead for intelligent machines, and b) our existing moral frameworks linking rights to intellectual capacity leads to some interesting results…

In particular, you are left with the following, potentially counter-intuitive conclusions.

  1. The AGIs developed in the future not only ought to have moral weight, but potentially more moral weight than the humans who create them.

  2. Intelligent beings (inclusive of both humans and machines) in the future deserve moral weight today.

  3. These two principles interact. Meaning: if you eat meat, not only ought you not fear intelligent machines, but you may actually inherit a responsibility for bringing them into being.

In the future, it’s build robots, or be replaced by them.

There is no other way. And maybe, by our own moral thinking, that’s a good thing.

Happy weekend.

Note: this piece was originally adapted from a speech I gave at the 2019 Altius conference at the University of Oxford. It was also posted earlier this summer as a thread on Twitter. Follow us there for this kind of work as and when it comes out.

Disclaimers

Charts and graphs included in these materials are intended for educational purposes only and should not function as the sole basis for any investment decision.
THERE CAN BE NO ASSURANCE THAT ROSE TECHNOLOGY INVESTMENT OBJECTIVES WILL BE ACHIEVED OR THE INVESTMENT STRATEGIES WILL BE SUCCESSFUL. PAST RESULTS ARE NOT NECESSARILY INDICATIVE OF FUTURE RESULTS. AN INVESTMENT IN A FUND MANAGED BY ROSE INVOLVES A HIGH DEGREE OF RISK, INCLUDING THE RISK THAT THE ENTIRE AMOUNT INVESTED IS LOST. INTERESTED PROSPECTS MUST REFER TO A FUND’S CONFIDENTIAL OFFERING MEMORANDUM FOR A DISCUSSION OF ‘CERTAIN RISK FACTORS’ AND OTHER IMPORTANT INFORMATION.

