I tried to fight it off, saying I was totally unqualified to go to any AI-related conference. On the trip from San Francisco airport, my girlfriend and I shared a car with two computer science professors, the inventor of Ethereum, and a UN chemical weapons inspector.
The rest of the conference was even more interesting than that. I spent the first night completely star-struck. Oh, those are the people who made AlphaGo. This might have left me a little tongue-tied. How do you introduce yourself to, e.g., David Chalmers?
But here are some general impressions I got from the talks and participants:

In part the conference was a coming-out party for AI safety research. The conference seemed like a wildly successful effort to contribute to the ongoing normalization of the subject.
Offer people free food to spend a few days talking about autonomous weapons and biased algorithms and the menace of AlphaGo stealing jobs from hard-working human Go players, then sandwich an afternoon on superintelligence into the middle.
Everyone could tell their friends they were going to hear about the poor unemployed Go players, and protest that they were only listening to Elon Musk talk about superintelligence because they happened to be in the area.
Then people talked about all of the lucrative grants they had gotten in the area. It did a great job of creating common knowledge that everyone agreed AI goal alignment research was valuable, in a way not entirely constrained by whether any such agreement actually existed.

Most of the economists there seemed pretty convinced that technological unemployment was real, important, and happening already.
We estimate large and robust negative effects of robots on employment and wages.
We show that commuting zones most affected by robots in the post era were on similar trends to others before, and that the impact of robots is distinct and only weakly correlated with the prevalence of routine jobs, the impact of imports from China, and overall capital utilization.
According to our estimates, each additional robot reduces employment by about seven workers, and one new robot per thousand workers reduces wages by 1.
Globalisation for me seems to be not a first-order harm, and I find it very hard not to think about the billion people who have been dragged out of poverty as a result.
It looks like economists are uncertain but lean towards supporting the theory, which really surprised me. I thought people were still talking about the Luddite fallacy and how it was impossible for new technology to increase unemployment because something something sewing machines something entire history of 19th and 20th centuries.
I had heard the horse used as a counterexample to this before — ie the invention of the car put horses out of work, full stop, and now there are fewer of them.
An economist at the conference added some meat to this story: the invention of the stirrup, which increased horse efficiency, and the railroad, which displaced the horse for long-range trips, both increased the number of horses, but the invention of the car decreased it.
This suggests that some kinds of innovations might complement human labor and others replace it. So a pessimist could argue that the sewing machine or whichever other past innovation was more like the stirrup, but modern AIs will be more like the car.

A lot of people there were really optimistic that the solution to technological unemployment was to teach unemployed West Virginia truck drivers to code so they could participate in the AI revolution.
I used to think this was a weird straw man occasionally trotted out by Freddie deBoer, but all these top economists were super enthusiastic about old white guys whose mill has fallen on hard times founding the next generation of nimble tech startups.
The cutting edge in AI goal alignment research is the idea of inverse reinforcement learning, in which an AI infers a reward function from observed human behavior rather than being handed one directly. Presumably this is solvable if we assume that our moral statements are also behavior worth learning from.
A more complicated problem: formalizing what exactly humans do have in place of a coherent utility function, and what exactly it means to approximate that thing, might turn out to be an important problem here. Such an AI might spend its time trying to learn about human values, and only take actions in the world when the expected reward was high enough.
This sort of AI also might not wirehead — it would have no reason to think that wireheading was the best way to learn about and fulfill human values. The technical people at the conference seemed to think this idea of uncertainty about reward was technically possible, but would require a ground-up reimagining of reinforcement learning.
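To make the "uncertainty about reward" idea concrete, here is a toy sketch of my own (not anything presented at the conference): an agent keeps a Bayesian posterior over candidate reward functions and updates it by watching a noisily-rational human choose between actions. The candidate rewards, the Boltzmann choice model, and the rationality parameter `beta` are all illustrative assumptions.

```python
import math

# Hypothetical candidate reward functions over two actions, "A" and "B".
CANDIDATE_REWARDS = {
    "likes_A": {"A": 1.0, "B": 0.0},
    "likes_B": {"A": 0.0, "B": 1.0},
    "indifferent": {"A": 0.5, "B": 0.5},
}

def choice_likelihood(rewards, action, beta=2.0):
    """P(human picks `action`) under a Boltzmann-rational choice model:
    higher-reward actions are chosen more often, but not deterministically."""
    z = sum(math.exp(beta * r) for r in rewards.values())
    return math.exp(beta * rewards[action]) / z

def update_posterior(prior, observed_action):
    """One Bayesian update of beliefs over reward functions, given that the
    human was observed choosing `observed_action`."""
    unnorm = {h: p * choice_likelihood(CANDIDATE_REWARDS[h], observed_action)
              for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Start maximally uncertain about what the human values, then watch the
# human pick "A" three times; belief should shift toward "likes_A".
posterior = {h: 1 / 3 for h in CANDIDATE_REWARDS}
for _ in range(3):
    posterior = update_posterior(posterior, "A")
```

An agent like this never treats any single reward hypothesis as certain, which is the intuition behind why it has no incentive to wirehead: tampering with its own reward signal would not help it learn what the human actually values.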