Current AI Models Seem Sufficient for Low-Risk, Beneficial AI

Cross posted from LessWrong

Note: Mostly opinion/speculation, hoping to encourage further discussion.

Summary: Current AI models have demonstrated amazing capabilities. It seems feasible to turn them into usable products without much additional scaling. These models are probably safe, and have the potential to do a lot of good. I tentatively argue that their deployment should be encouraged, with reservations about accelerating risky AI development.

Current AI has potential

It feels like there is a new AI breakthrough every week. Driven by advances in model architecture, training, and scaling, we now have models that can write code, answer questions, control robots, make pictures, produce songs, play Minecraft, and more.

Of course, a cool demo is far from a product that can scale to billions of people; but the barriers to commercialization seem surmountable. For example, though the cost of a ChatGPT query is roughly 7x that of a Google search, it seems feasible to bring inference costs down by an order of magnitude. In fact, there are already several companies building products based on transformer models.

I see potential for existing AI to have a large positive impact. Here are just a few possibilities:

Digital Media Creation: Diffusion models and language models can already be used to generate text, images, audio, video, 3D objects, and virtual environments. These tools allow anyone to generate high-quality digital media and can augment human creativity.

Search: With so much digital content being produced using AI, we will need a way to sift through it all and find things people like. Language models can describe various forms of media using text, compress those descriptions, and use semantic search to find content that a particular person would enjoy (a toy sketch of this appears at the end of this section).

Education: Language models can act as tutors, teaching children any topic at an appropriate level, producing educational materials, and creating Anki-style question sets to cement understanding. A similar approach could be used to get researchers up to speed on any topic.

These are just some of the possibilities commonly discussed today; I expect people to come up with even more ingenious uses for modern AI.
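
To make the search idea above concrete, here is a minimal, hypothetical sketch of description-based semantic search: each piece of media gets a short text description (possibly AI-generated), descriptions and queries are embedded as vectors, and items are ranked by cosine similarity. The `embed` function below is a toy bag-of-words stand-in for a real language-model embedding, and the catalog entries are made up.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash words into a fixed-size
    vector. A real system would use a language-model embedding, which captures
    meaning rather than just word overlap."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def semantic_search(query: str, catalog: dict, top_k: int = 2) -> list:
    """Rank catalog items by similarity between the query and each item's description."""
    q = embed(query)
    scores = {item: cosine_similarity(q, embed(desc)) for item, desc in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Made-up catalog of AI-generated media with short text descriptions.
catalog = {
    "song_1": "upbeat synth pop track about summer road trips",
    "video_7": "calm documentary clip of a coral reef at sunset",
    "image_3": "watercolor painting of a rainy city street at night",
}
print(semantic_search("calm coral reef video", catalog))  # 'video_7' should rank first
```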

Current AI seems pretty safe

Despite their impressive capabilities, it seems unlikely that existing models could or would take over the world. So far, they (mostly) seem to do what they’re told. Even AI researchers with particularly short timelines don’t seem to believe that current AIs pose an existential risk.

That being said, current models do pose some risks that deserve consideration, such as:

Intelligence explosion: It may be possible for existing language models to self-modify in order to become more intelligent. I am skeptical of this possibility, but it warrants some thought.

Infohazards and Bad Actors: Language models could provide bad actors with new ideas or generally increase their capabilities. Fortunately, the very same models can also assist good actors, but it’s unclear how current AIs will change the balance. Companies supplying language model outputs as a service should take steps to prevent this failure mode.

Misinformation and Media Over-production: Generative models could produce a deluge of low-quality digital media or help produce misinformation. Methods to filter low-quality AI content will need to be developed. Improved search capabilities may counteract this problem, but once again it’s unclear how language models will shift the balance.

These risks deserve attention, especially since they mirror some of the risks we might expect from more capable AIs.

Are these risks large enough that current AI will have net negative impact? I expect not. Alongside mitigation efforts, the potential benefits seem far larger than the downsides to me.

Careful deployment of current AI should be encouraged (with caveats)

If you buy the previous two arguments, then the careful deployment of current AI should be encouraged. Supporting or starting AI companies with a robustly positive impact and a high concern for safety seems like a good thing. This is especially true if the counterfactual AI company has little regard for safety.

Additionally, the continued success of small models might lead companies to eschew larger models, redirecting them towards safer products. On the other hand, proliferation of small models could also encourage investment into larger, riskier models. I’m uncertain about which effect is stronger, though I lean towards the former.

Research Areas for Reducing Sleep Need

Previously, I have emphasized the value of interventions to reduce sleep need. Here, I present a few research directions that hold promise for making sleep more efficient.

Pharmaceutical Targets

Orexin is a hormone that increases alertness and reduces the effects of sleep deprivation. Some short sleepers have a specific mutation to their orexin receptors; giving this same mutation to mice reproduces the short-sleep phenotype. On the other hand, a deficiency of orexin leads to narcolepsy. Finding agonists of orexin receptors seems promising.

Melatonin may reduce sleep need by about an hour, if used properly. This comes from anecdotal evidence from Gwern Branwen and others. This makes sense, because melatonin is one of the hormones that controls sleep/wake cycles. It would be interesting to run a trial to see how much this generalizes. Because melatonin is not patentable, companies have no incentive to run these trials.

Neuropeptide S has been linked to the genetics of short sleep as well. In addition to wakefulness, it also seems to reduce appetite and anxiety.

Mutations to the ADRB1 and mGluR1 receptors have been linked to familial natural short sleep, so agonists of these receptors may promote wakefulness. There is a lot of research on drugs targeting ADRB1, but as far as I know none of it has looked at promoting wakefulness.

S-Adenosyl methionine has antidepressant effects and has the side effect of causing insomnia (which, in my mind, is just enhanced wakefulness at the wrong time). This is particularly interesting given that many long-sleep disorders may be caused by depression, and treating one may also help with the other. The link between depression and sleep is just beginning to be explored.

Other Treatments

Sleep hygiene may be sufficient to reduce total time spent sleeping. Current research doesn’t focus on how sleep habits can reduce sleep duration, but a shift in focus may turn up interesting results.

Cognitive Behavioral Therapy can be used to address insomnia, back pain, and mental illness. It’s possible that CBT could also be used to increase sleep efficiency. Dealing with misperceptions about sleep quality may alleviate some of the symptoms of insomnia.

Transcranial magnetic stimulation and Transcranial direct current stimulation both involve non-invasively applying electromagnetic fields to the brain. It’s possible that these techniques could be used to modify a person’s circadian rhythm (similar to melatonin) or make their sleep more efficient.

More speculatively, gene therapy would be the most direct way to modify sleep traits. We already know of several mutations that lead to natural short sleep, and it may be feasible to give a healthy individual these traits. The safety risks are simply too high for this line of research to be sensible, but clinical trials for people afflicted by long-sleep syndromes may have a better risk/benefit tradeoff.

Sleep duration shows a strong age dependence, with babies and teenagers needing more sleep and the elderly needing less. What is the source of these changes over a lifetime?

Links #33

How To Terraform Mars – WITH LASERS

Creating A Martian Cryptocurrency fleshes out the idea for funding a Mars colony using future land value tax revenue. This piece was written before Georgism in Space where I suggested something similar.

Found a thoughtful Critique of Georgism by Paul Birch. There’s other interesting stuff archived there as well.

Megalopolis: how coastal west Africa will shape the coming century

Twitter thread of interesting economic time series pointed me to this paper. Apparently the “child penalty” (professional income lost by women due to child rearing) has been falling for decades. If the career cost of children is shrinking while fertility keeps declining, that strengthens the hypothesis that housing costs are driving fertility declines.

Alice Evans thread on low male labor force participation in the U.S. Lots of important questions that need more research.

Why didn’t we get the four-hour workday?

New paper on how people have “… a pervasive tendency, across ideological and demographic categories, to see things as getting worse than they really are.”

Social media as an incubator of personality and behavioral psychopathology

Eli Parra on making better spreadsheets

Scott Aaronson’s “Introduction to Quantum Information Science Lecture Notes” part 1 and part 2 are excellent. Combined with Quantum Computing Since Democritus, you can pretty quickly get up to speed just by trying to follow the informal explanations.

Completely Device Independent Quantum Key Distribution

Matthew Green explains One-Time Programs. A fascinating cryptographic primitive we might be able to implement with a small amount of simple hardware.

How to… use AI to generate ideas

Transformers from Scratch

How important are accurate AI timelines for the optimal spending schedule on AI risk interventions?

Thread on the DART asteroid mission, which transferred 3.6 times the spacecraft’s momentum to the asteroid, likely due to material ejected from the asteroid. One could imagine optimizing the impactor for this property by firing a slug near its boiling point at the asteroid. After impact, the slug would boil off and take some of the asteroid with it, creating a stronger push.

Related: Terminal Planetary Defense

Lithium-air batteries seem promising: their theoretical specific energy is close to that of gasoline. Lithium-air cells with 5x the energy density of standard Li-ion batteries have been demonstrated, which is competitive with gasoline once engine inefficiencies are accounted for. More energy-dense batteries could open up new opportunities for electric vehicles and airplanes.

Maturation and circuit integration of transplanted human cortical organoids. Might it be possible to add more neurons to parts of a healthy human brain to treat illnesses and increase intelligence?

Interesting talks from Foresight Institute:

Atomically Precise Catalysts

Nanoscale Instruments for Visualizing Small Proteins

Software-Facilitated Design Atomically Precise Manufacture

Links #32

Lab-grown blood given to people in world-first clinical trial

Excellent whitepaper reviewing fertility decline, technologies, policy solutions, and research projects.

Box 1 of this paper suggests that STIs might account for millions of cases of infertility, while syphilis may cause upwards of 300,000 perinatal deaths per year. For reference, malaria causes 500,000 deaths per year. Syphilis and other STIs seem like an important problem.

The molecular mechanism of natural short sleep: A path towards understanding why we need to sleep

Neuropeptide S is implicated in natural short sleep and promotes wakefulness, so it may be a useful target for making sleep more efficient.

The RNA/Protein Symmetry Hypothesis: Experimental Support for Reverse Translation of Primitive Proteins. Reverse translation (conversion of proteins back into RNA) would be pretty interesting because you could convert proteins into something that can be amplified and sequenced, enhancing the information we get from single-cell sequencing. It might also provide a new way to introduce genetic information into cells. In theory, the ribosome should be reversible, but I haven’t seen much work beyond this paper.

In Defense of Missile Defense. Everyone should build missile defense systems; combined with effective treaties to limit proliferation, they would mitigate the risks of nuclear weapons a lot.

Hypervelocity Tether Rockets

The Megastructure Compendium

Transforming the Future of Marine Aquaculture: A Circular Economy Approach

Optimism: Why it’s worth trying to make the world more optimistic

The end of software

Site devoted to crowdsourcing AI safety ideas and projects.

Excellent model of optimal AI-risk spending that considers timelines, alignment difficulty, and influence.

I like Hyperstructures’ vision of self-perpetuating, fair, and positive-sum protocols. If you take away the crypto aspects, this frame also fits markets, governments, communities, and movements.

Helium seems to be a crypto protocol that perpetuates internet access.

Golden seems to be a protocol for organizing the world’s knowledge.

Some posts I liked from Economic Forces:

Monopolies don’t make enough money

Why Do Firms Merge?

Does Anyone Understand Externalities?

Markets Must Become Competitive

Links #31

On Being Rich-ish, a followup to On The Experience of Being Poor-ish.

Peaceful nuclear explosions

The problem of simulator evil

Ethics and anthropics of homomorphically encrypted computations. An interesting ethical problem.

This Security Engineering textbook is very thorough. Also from that author: a paper and blog post about a system to allow whistleblowers to securely contact news organizations.

Should there be demand-based recurring fees on ENS domains? Some of the ideas here could apply to a land value tax more broadly.

What if we could automate invention?

Deterministic Lateral Displacement is a technique for sorting particles of different sizes in a microfluidic channel. It seems like there are a lot of ways this could be improved, by changing the spacing along the pathway or by combining it with techniques like dielectrophoresis, acoustic tweezers, optical tweezers, and localized characterization. You could imagine sorting and recombining particles based on their properties, allowing you to conduct many small-scale chemistry and biology experiments.

Room-temperature catalyst-free methane chlorination. Seems like a useful reaction. Activating methane and converting it into something that is easier to transport is valuable.

Aging is already solved in vitro. What comes next?

Things that Increase Female Reproductive Lifespan

The Cost of Raising a Child. The time-price of having kids has been mostly flat for the last 50 years. Career opportunity cost and housing costs seem like better explanations for falling fertility. It’s straightforward to address childcare and housing costs, but the opportunity cost of children will keep rising as incomes increase.

Are c-sections underrated?

Isaac Asimov Asks, “How Do People Get New Ideas?”

Nicky Case’s Nutshell is a neat tool for making “expandable, embeddable explanations” within a website and borrowing them from other websites. Very cool!

Related: Subconscious

Related: Fermat.ws

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks

How Open Source Machine Learning Software Shapes AI. This kind of research is critical for SatisfAI, which requires influencing AI development via targeted development.

How Software in the Life Sciences Actually Works

Book review: Barriers to Bioweapons. Seems like bioweapons can be made infeasible by stopping public gain-of-function research, increasing surveillance, and making it known how difficult such weapons are to build.

From that post: Digital-to-biological converter for on-demand production of biologics

Orexin and the quest for more waking hours

Cardinality and Utilitarianism through social interactions. Cool paper showing how you can obtain interpersonal comparisons of utility using social interactions. Similar to an idea I proposed recently.

Topics in Economic Theory and Global Prioritization by Philip Trammell is a very thorough introduction to the economics of philanthropy, global priorities, and EA topics.

The Economic and Fiscal Effects on the United States from Reduced Numbers of Refugees and Asylum Seekers. Asylum seekers are a big money maker, if only we could figure out how to make it politically feasible to accept more of them.

Links #30

How to terraform Mars for $10b in 10 years

Let’s Terraform West Texas

Thermodynamic feasibility analysis of distributed chemical looping ammonia synthesis

Manganese Nodules could be useful sources of metals if the prices get high enough.

Storing Freshwater In The Salty Sea

Circe is a company making triglycerides (fat) from CO2. Food from air!

FIND-IT: Accelerated trait development for a green evolution. A plant breeding method that leverages improvements in sequencing tech to identify high performing mutants. This would pair well with mutation breeding to increase the variance and diversity of natural mutations.

A Future History of Biomedical Progress

Bioadhesive ultrasound for long-term continuous imaging of diverse organs. Could something like this also be used for therapeutic ultrasound?

There has been interest in creating cheap UV-C lighting and air filters to sanitize air at a massive scale and prevent the spread of diseases. But what about sanitizing surfaces? Anti-microbial copper seems like a good solution, and the wikipedia page points to two companies that produce a long-lasting spray-on copper coating. What is preventing us from applying antimicrobial coatings to every high-touch surface? Could UV-C lighting also disinfect surfaces? Are there anti-microbial polymers or coatings that could be added to non-metal surfaces? Titanium dioxide also has disinfectant properties when interacting with UV light. Combined with alcohol hand-sanitizer and citizen education, surface transmission of microbes could be dramatically reduced.

Transnational Surrogacy as a cause area

Naval Gazing isn’t worried about nuclear winter. New paper on the food impacts of a nuclear war suggests 2B deaths total (their headline of 5B deaths assumes an unreasonable amount of soot released). Luisa Rodriguez’s research also suggests that nuclear winter is unlikely. Nuclear war remains a big catastrophic risk, but is unlikely to be an existential risk.

Discovering Agents. These kinds of results are important for an AI influence approach.

Investing in Infants: The Lasting Effects of Cash Transfers to New Families

Can a Stablecoin Be Collateralized by a Fully Decentralized, Physical Asset? It looks like it is physically possible to transfer useful work purely via communication, which makes it possible to build a digital asset backed by useful energy.

The Physical Basis of Personal Identity

Everyone has a unique set of defining characteristics. These traits are robust; they seem to stay with us over the years, surviving different circumstances, experiences, and relationships.

(For the remainder of this post, I will use “personal identity” to refer to the bundle of traits, personality, and habits of mind that make us unique and recognizable to others. Some would refer to it as your “mind” or “consciousness” or “ego”, but I want to avoid using such loaded terms. EDIT: I want to emphasize the component of identity that makes you recognizable to others. This seems under-appreciated relative to internal experience.)

I’m not so interested in talking about the particular features of personal identity; instead, I want to think about where it literally resides. What encodes these important features of our personality? Where do our quirks come from?

Any theory on the basis of personal identity has to explain an array of facts:

  1. Personal identity seems to persist over decades. There is something unique and recognizable about each person which is remarkably consistent.
  2. Neurotransmitters and electrical activity vary over the course of seconds. Sleep and psychoactive drugs can change neural activity significantly. Anesthesia can significantly lower or shut off [1] neural activity. Deep brain stimulation can have dramatic effects which are reversed once stimulation stops.
  3. The connections between neurons are more persistent than electrical activity, but still change a lot. Memory formation, learning, and sleep are known to change neural pathways. Some drugs like SSRIs and psychedelics may also change synaptic connections. In general, neural processing remains robust despite significant “representational drift” [2].
  4. Studies estimate 1-15% of neurons are active at any one time [3]. Neurons are inconsistent about firing, responding to stimuli ~17% of the time [4].
  5. People can suffer significant brain trauma without a change in their personal identity. Consider concussions, hydrocephalus, split-brain patients, neurodegenerative diseases, and case-studies of traumatic brain injuries [5].
  6. Genes seem to have a strong influence on personal identity despite only a few genes being known to affect neural development. The information in these genes is not enough to encode the details of an entire brain. Mental illnesses are one class of partially genetic traits that have an effect on personality.
  7. Animals seem to have some sort of personal identity. The basic biology of their brains is almost identical to humans, though humans typically have more neurons and synapses (though some animals have more neurons, synapses, cortex neurons, and more neurons per body mass).

I think these points can rule out a couple possibilities.

For example, 1 seems incompatible with 2. The difference in timescale suggests that there is something else orchestrating neural activity over long time periods to produce personal identity. So electrical activity and neurotransmitters probably aren’t what we’re looking for.

Neural connections seem like a natural candidate, but this is unsatisfying because of the examples in 3-5. Individual neural connections are getting added and subtracted all the time with no apparent changes in personal identity. Even the connections that persist are only occasionally active. There are even cases where personal identity remains intact despite losing large sections of brain matter or experiencing significant reductions in connectivity.

Perhaps identity is some global, “emergent” property of your neural connections? Points 5-7 would argue against this. Personal identity seems to have some relatively simple source. Split-brain patients survive through a reduction in connectivity, suggesting personal identity does not depend on the cooperation of the entire brain. Animals seem to have a personality despite having a simpler neural architecture and fewer neurons, which indicates that identity can be formed on a much simpler substrate. The strong influence of a few genes carrying relatively little information suggests that personal identity might have only a few parameters that combine to produce different identities. All of this suggests that personal identity is stored in a relatively simple structure.

This would explain why personality tests can assign people to a few categories yet predict a lot about someone. Perhaps this is why we can quickly get a sense of someone’s character after only a short period.

So where does that leave us? My best guess is that personal identity is stored somewhere between the “individual synapse” level and the “whole brain emergence” level. It should be a semi-localized structure involving bundles of neural connections that persists over long periods.

This seems like it could create continuity through the years without being so sophisticated that it falls apart in cases of brain trauma.

The relative simplicity of this structure would explain why other animals seem to have personal identity, why a handful of genes have a strong influence, and why people sort into a small set of personality types.

This has interesting parallels with this paper [6] suggesting that stable electric fields in the brain communicate information despite constant turnover of the underlying synapses.

This has implications for brain emulations. If personal identity is simpler than we thought, it could be much easier to upload someone’s mind. Only a few substructures would need to be changed to produce a completely different person. This would mean that uploading a new mind would require a relatively small number of personal details [7].

But it’s hard to have a lot of confidence in these claims; there’s a good chance that I misinterpreted the evidence. This line of reasoning could fall apart with a more rigorous definition of “personal identity”.

We will have to wait for more research on how the brain processes information despite representational drift, searching for simple, robust architectures that can store our identity over long periods.

Notes
  1. Hope is found even in flat-lined EEG
  2. Some reviews on representational drift:
    The sloppy relationship between neural circuit structure and function
    Causes and consequences of representational drift
    Representational drift: Emerging theories for continual learning and experimental future directions
  3. The Cost of Cortical Computation
    An Energy Budget for Signaling in the Grey Matter of the Brain
  4. Fluorescent false neurotransmitter reveals functionally silent dopamine vesicle clusters in the striatum
  5. Some interesting cases:
    Phineas Gage
    Ahad Israfil
    Anatoli Bugorski
    Lev Zasetsky
    Howard Dully
    Wenceslao Moguel
    Brain of a white-collar worker
  6. Beyond dimension reduction: Stable electric fields emerge from and allow representational drift
  7. It could also be easier to uplift animals than we thought. It might require only a few updates to produce human-like minds from other species.

Approaches to Interpersonal Comparison of Utilities

One difficulty with utilitarianism is “interpersonal comparison of utilities”. It’s hard to relate changes in different people’s utility. If we took an apple from Alice and gave it to Bob, is there any way to compare Alice’s sadness to Bob’s happiness? It seems like there’s no principled method to compare these feelings.

Though people feel emotions with different intensities, it would be hard to quantify this in the way that utilitarianism demands. We have no way of knowing exactly how people feel on the inside.

But I think we can sense the intensity of someone’s emotions in real life. We use this sense all the time when making decisions as a group. If we can compare utilities in some practical sense, we might find a way to compare them more systematically.

I want to suggest some approaches to comparing people’s utilities. The goal of this comparison is to derive “weights” to apply to each individual’s utility function that allow us to compare across individuals. These weights can be used to aggregate utilities, which is important for collective decision-making.

The simplest approaches to interpersonal comparison of utilities involve some form of “normalization”: finding some way to rescale the utilities of one person in order to compare them to the utilities of another person. One common type is called “range normalization”, where you subtract each person’s lowest possible utility and divide by the range between their highest and lowest possible values. For example, you could normalize everyone’s utility to be in the range from 0 to 1. This is nice because it puts utilities on a common scale.
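
As a toy illustration of range normalization (all utility numbers below are made up):

```python
def range_normalize(utilities: dict) -> dict:
    """Rescale one person's utilities over a set of options to the range [0, 1]."""
    lo, hi = min(utilities.values()), max(utilities.values())
    return {option: (u - lo) / (hi - lo) for option, u in utilities.items()}

# Made-up utilities over the same three options for two people.
alice = {"apple": 10, "banana": 2, "cherry": 5}
bob = {"apple": 3, "banana": 9, "cherry": 1}
print(range_normalize(alice))  # {'apple': 1.0, 'banana': 0.0, 'cherry': 0.375}
print(range_normalize(bob))    # {'apple': 0.25, 'banana': 1.0, 'cherry': 0.0}
```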

Another form of normalization involves scaling utilities based on the minimum perceivable difference for each agent. Put another way, agents’ utilities are scaled based on how sensitive they are to certain changes.

One issue with these previous normalization schemes is that they are not strategy-proof. That is, agents may have an incentive to lie about their preferences when making a group decision. However, variance normalization is a different approach which can be strategy-proof under some circumstances.
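
And a minimal sketch of variance normalization in its usual form: center each person’s utilities and rescale them to unit standard deviation, so that nobody’s reported utilities dominate simply because they are spread over a wider numeric range.

```python
import statistics

def variance_normalize(utilities: dict) -> dict:
    """Center one person's utilities and rescale them to unit standard deviation."""
    values = list(utilities.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation
    return {option: (u - mean) / sd for option, u in utilities.items()}

alice = {"apple": 10, "banana": 2, "cherry": 5}
print(variance_normalize(alice))  # resulting utilities have mean 0 and standard deviation 1
```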

There are other normalization schemes that have been proposed, but I want to offer some new approaches to interpersonal comparison.

Instead of deriving explicit weights for each person, we might also consider looser heuristics which bound the possible weights we can give to each agent.

For example, we could require that at least one world-state for agent A is at least as valuable as another world-state for agent B. This then limits our weighting scheme to values where A’s and B’s ranges overlap. Alternatively, one could align the center of each person’s utility range, requiring that the medians be equal.

We can also place other heuristic requirements on people’s utility functions. We can enforce a rule that similar agents with similar preference orderings should have similar utility values on each world-state. For example, the effect of receiving a dollar should be roughly the same for two wealthy people.

Another approach is to use the existing social network to pin down interpersonal comparisons of utility. If you can determine how friends conduct interpersonal comparisons of utility, you can use social connections to infer interpersonal comparisons between unrelated individuals. For example, if Alice’s utility weighed twice as much as Bob’s, and Bob’s utility weighed twice as much as Charlie’s, then we might infer that Alice’s utility weighs 4 times as much as Charlie’s.

How can we get friends to reveal how they weigh each other’s utility? Well, we can ask them to make decisions as a pair and observe how they collectively weigh each person’s utility in these decisions. Next, pair them up with new friends, repeating the process until all individuals are part of a connected social graph. The relative weight of any two individuals can then be determined by finding a path between them on the social graph and multiplying the weights of each connection along the way.
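
Here is a rough sketch of that chaining procedure, assuming the pairwise ratios have already been observed from joint decisions (the names and numbers are made up). If different paths between the same two people gave inconsistent products, some reconciliation step would be needed:

```python
from collections import deque

# ratios[a][b] = how much one unit of a's utility is worth in units of b's utility,
# as observed from joint decisions between friends a and b (made-up numbers).
ratios = {
    "alice": {"bob": 2.0},
    "bob": {"alice": 0.5, "charlie": 2.0},
    "charlie": {"bob": 0.5},
}

def relative_weight(source: str, target: str) -> float:
    """Find a path from source to target in the friendship graph and multiply
    the observed pairwise ratios along it."""
    queue = deque([(source, 1.0)])
    visited = {source}
    while queue:
        person, weight = queue.popleft()
        if person == target:
            return weight
        for friend, ratio in ratios.get(person, {}).items():
            if friend not in visited:
                visited.add(friend)
                queue.append((friend, weight * ratio))
    raise ValueError("no path between these people")

print(relative_weight("alice", "charlie"))  # 2.0 * 2.0 = 4.0
```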

Another interesting idea is to update weights dynamically. If the weights were “fair”, no agent would consistently gain utility at the expense of others. We can try to enforce this principle by changing an agent’s weighting in proportion to how much their utility has changed recently. In the long term, these weights should settle to an equilibrium where each individual’s utility increases about equally. This is reminiscent of Andrew Critch’s approach to utilitarian decision-making between Bayesian agents with differing priors.
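
A toy sketch of one way this could work (not Critch’s actual scheme): after each round of decisions, shrink the weights of agents whose utility grew faster than the group average, boost the others, and renormalize.

```python
def update_weights(weights: dict, recent_gains: dict, rate: float = 0.1) -> dict:
    """Adjust each agent's weight downward in proportion to how much more utility
    they gained recently than the group average, then renormalize."""
    avg_gain = sum(recent_gains.values()) / len(recent_gains)
    adjusted = {
        agent: max(w * (1 - rate * (recent_gains[agent] - avg_gain)), 1e-6)
        for agent, w in weights.items()
    }
    total = sum(adjusted.values())
    return {agent: w / total for agent, w in adjusted.items()}

weights = {"alice": 0.5, "bob": 0.5}
recent_gains = {"alice": 3.0, "bob": 1.0}  # Alice has been gaining more than Bob
print(update_weights(weights, recent_gains))  # Bob's weight increases to compensate
```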

Another approach is to give higher weight to individuals who have done the most to boost the group’s utility. This rewards individuals who do good on behalf of others. It has parallels to a free market, where individuals are paid based on the services they provide for others and accrue power proportional to their pay.

It seems that, in the absence of strategic considerations, there are plenty of viable ways to compare utilities across people. However, finding a strategy-proof scheme is much harder. In fact, with a wide enough domain of preferences, fair, strategy-proof collective voting is impossible!

However, I think weaker notions of strategyproofness could be used for practical systems that are immune to manipulation. More work needs to be done to define and test these systems. If successful, they could provide a rigorous basis for collective choice.

Precise P(doom) isn’t very important for prioritization or strategy

Cross-posting something I wrote on LW

People spend time trying to determine the probability that AI will become an existential risk, sometimes referred to as P(doom).

One point that I think gets missed in this discussion is that a precise estimate of P(doom) isn’t that important for prioritization or strategy.

I think it’s plausible that P(doom) is greater than 1%. For prioritization, even a 1% chance of existential catastrophe from AI this century would be sufficient to make AI the most important existential risk. The probability of existential catastrophe from nuclear war, pandemics, and other catastrophes seems lower than 1%. Identifying exactly where P(doom) lies in the 1%-99% range doesn’t change priorities much.

As with AI timelines, it’s unclear that changing P(doom) would change our strategy towards alignment. Changing P(doom) shouldn’t dramatically change which projects we focus on, since we probably need to try as many things as possible, and quickly. I don’t think the list of projects or the resources we dedicate to them would change much in the 1% or 99% worlds. Are there any projects that you would robustly exclude from consideration if P(doom) was 1-10% but include if P(doom) was 90-99% (and vice versa)?

I think communicating P(doom) can be useful for other reasons like assessing progress or getting a sense of someone’s priors, but it doesn’t seem that important overall.

Black Hole Civilizations

The future is bright?

Despite being known for their absence of observable features, black holes have some amazing properties:

Matter-energy conversion: Matter that goes into a black hole eventually comes back out as Hawking radiation. This should allow you to convert solid matter into energy.

Energy storage: Black holes can store energy for extremely long periods of time because of how slowly Hawking radiation is emitted.

Weaponry: Due to the high energy density and the whole spaghettification thing, black holes might make powerful weapons.

Thrust: A small black hole can emit a significant amount of Hawking radiation, which can be used for thrust. This makes black holes an excellent fuel for starships.

Time-travel: Orbiting close to a massive black hole causes significant time dilation, slowing you down relative to a distant observer. You can use this to time-travel into the future (a standard formula quantifying the effect appears after this list of properties).

Cryptography: Black holes seem to scramble information that is placed inside of them in an unpredictable way. It’s possible that this scrambling could be used to create pseudorandom number generators and other cryptographic primitives.

Computation: Black holes may constitute an ideal quantum-gravitational computing device. The time-dilation from being near a black hole can give you an additional speedup. In fact, if you are willing to make the one-way trip inside of a black hole, you may be able to receive solutions to hard problems (See also: Malament-Hogarth spacetime).

Communication: Black holes create extraordinarily bright light and gravitational waves, which might be used for high-fidelity communication across the cosmos. Gravitational lensing could allow for extreme magnification of incoming light.

Currency: Black holes have only 3 properties: mass, charge, and spin, which are easy to verify. Given how useful and long-lived they are, they are an excellent store of value, which means that black holes could be exchanged as a physical currency. The mass constitutes a built-in accounting tool, and since they provide excellent fuel for interstellar flight, black holes are ideal objects to transport across space.

Unknowns: There’s so much we don’t know about black holes. Unforeseen factors may create new possibilities. For example, the lack of properties beyond mass, charge, and spin makes black holes look like fundamental particles. Is it possible to build new particle physics or condensed matter physics using black holes? Might the interior of a black hole offer a new universe to explore? Could black holes be used to harness dark matter and energy?
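
To put a rough number on the time-travel point above, the standard Schwarzschild result (general relativity textbook material, not specific to this post) is that an observer holding position at radius r outside a non-rotating black hole of mass M ages more slowly than a distant observer by the factor

$$\frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{r c^{2}}} = \sqrt{1 - \frac{r_s}{r}},$$

which goes to zero as r approaches the Schwarzschild radius r_s = 2GM/c^2. In principle, arbitrarily large jumps into the future are possible by loitering just outside the horizon.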

Civilizations that travel to black holes will learn new physics, build spectacular technologies, and harness enormous amounts of energy. Their amazing properties suggest that black holes may form the basis of interstellar society. Let’s hope we get there.