A peer-reviewed electronic journal published by the Institute for Ethics and
Emerging Technologies

ISSN 1541-0099

[wta-talk] Citizen Cyborg on If Uploads Come First

Robin Hanson Wed Mar 29 23:44:15 BST 2006

I learned last night that pages 169-170 of James Hughes' book 
"Citizen Cyborg" discusses my paper "If Uploads Come First" 
(http://hanson.gmu.edu/uploads.html).    In that section Hughes 
severely misrepresents my positions.  He paints me as gleefully 
advocating a ruthless, pampered ruling class, "not very 
different from contemporary luxury suburbs," being "set off from a 
larger terrain of violence and injustice" among downtrodden 
masses.   I am posting a public response here, to a list I know he reads.

From Citizen Cyborg (excerpted 
at:  http://hanson.gmu.edu/PAM/press/CitizenCyborg-excerpt.txt):
>The extropians have also cultivated important allies in libertarian 
>politics such as Virginia Postrel and Ron Bailey, sympathizers with 
>their militant defense of personal liberty and hostility to 
>regulation and environmentalism.  ... Postrel has now organized 
>Bailey and other technolibertarians, ... into The Franklin 
>Society.  The first project of the Society has been to campaign 
>against attempts to ban embryonic stem cell research.   In 2003, one 
>member of the new Franklin Society, extropian economist Robin 
>Hanson, a professor at George Mason University, achieved his full 
>fifteen minutes of fame.  ... While I think the experiment had merit 
>and would not have encouraged terrorism, the episode does illustrate 
>some of the moral and political blindness that the unreformed 
>extropian anarcho-capitalist perspective lends itself to.

Putting me in this context suggests that I have a "militant defense 
of personal liberty and hostility to regulation and environmentalism" 
and that I am an "unreformed extropian anarcho-capitalist".   While I 
have long associated with people under the flag "extropian" (via 
mailing lists, conferences, and journals), I deny these other claims.

In 2002 I agreed to sign a petition saying "therapeutic cloning 
should not be banned," sponsored under the name "Franklin Society," 
but I otherwise have no knowledge of or association with such a 
society.   I presume that James would also have signed such a 
petition at the time.

The Policy Analysis Market 
(http://hanson.gmu.edu/policyanalysismarket.html) was a joint project 
of many people, and I was the only such person with any "extropian" 
associations.  Other people on the project were more directly 
responsible for the web page examples that caused the furor; those 
people can reasonably be blamed for "political blindness," though not 
in my opinion for "moral blindness."

>... he published a now often-cited essay "If Uploads Come First - 
>the Crack of a Future Dawn" in Extropy magazine.   ...  He argues 
>that the capabilities of machine-based person would be so much 
>greater than those of organic humans that most non-uploaded people 
>would become unemployed.

My main argument was that uploads will *cost* much less, not that 
they would be more capable.

>... Eventually the enormous population of uploads would be forced to 
>work at very low subsistence wages - the cost of their electricity 
>and disk space - ruled over by a very few of the most successful of 
>the uploads.

I say nothing about people being ruled over by a successful 
elite.   I talk disapprovingly about wealth inequality among humans, 
caused by some humans not insuring against an upload transition.  I 
talk about inequalities in the number of copies made of particular 
uploads, but I do not speak at all about wealth inequalities among uploads.

>Hanson dismisses the idea that governments could impose 
>redistribution on uploads since there would be large economic 
>benefits of an unfettered transition to Matrix life.

The only thing I say about government redistribution is this:
>politicians would do better to tax uploads and copies, rather than 
>forbidding them, and give the proceeds to those who would otherwise 
>lose out. [Footnote 14: Note that such a tax would be a tax on the 
>poor, paid to the relatively rich, if one counted per upload copy.]

This is hardly a dismissal of redistribution.  Nor is my claim one I 
think James would disagree with.

Returning to Citizen Cyborg:
>The average quality of life of the subsistence upload and the 
>unemployed human would allegedly be higher than before.  So the best 
>we future residents of an uploaded society can do is become as 
>versatile as possible to maximize our chances of ending up as one of 
>the lucky rule or employed classes.

The first sentence here is a reasonable summary of my position.  But 
the second sentence here does not at all follow from the first, and I 
said nothing like it in my paper.

>  Hanson dismisses the idea that people will fight the division of 
> society into a mass of well-fed plebes and a superpowerful elite 
> since the growth in the gross domestic produce is the sole measure 
> of his utopia,

I never mentioned anything like "gross domestic produce" and so 
certainly didn't endorse it as a "sole measure" of value.   The 
division I talk most about is humans and uploads, not a "well-fed 
plebes and a superpowerful elite," and to the extent I take sides it 
is with the uploads, who are poorer.

>and the elimination of the weak will select for "capable people 
>willing to work for low wages, who value life even when life is 
>hard."   With a dismal, elitist utopia like this who needs a 
>Luddites's dystopia?

My paper was mainly a *positive*, not a *normative*, analysis.   That 
is, I mainly tried to forecast what would happen under such a 
scenario, and made only limited comments along the way about what 
private actions or public policies we might prefer.   I tried not to 
shy away from describing the negatives along with the positives.

Even after all of Hughes' strong language, I'm not sure I can 
identify any particular claim I made in my paper that he would 
disagree with.   And while he favors redistribution, it is not at all 
clear to me who he wants to take from, and who to give to under the 
scenario I describe.   After all, given the three distinctions of 
human/upload, rich/poor, and few/many-copied, there are eight 
possible classes to consider.
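The class count Hanson cites follows mechanically from the three binary distinctions; a minimal illustrative sketch (Python, labels mine, not from the paper) enumerates them:

```python
from itertools import product

# Hanson's three binary distinctions; every combination is one
# candidate "class" a redistribution scheme would have to weigh.
distinctions = [
    ("human", "upload"),
    ("rich", "poor"),
    ("few-copied", "many-copied"),
]

classes = list(product(*distinctions))
for c in classes:
    print("/".join(c))

print(len(classes))  # 2 * 2 * 2 = 8 possible classes
```

The point of the enumeration is that "redistribute from rich to poor" is underspecified here: each of the eight combinations could plausibly be cast as the deserving or undeserving party.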


Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323  




Martin Striz Thu Mar 30 01:16:59 BST 2006

On 3/29/06, Robin Hanson <rhanson at gmu.edu> wrote:

> >... he published a now often-cited essay "If Uploads Come First -
> >the Crack of a Future Dawn" in Extropy magazine.   ...  He argues
> >that the capabilities of machine-based person would be so much
> >greater than those of organic humans that most non-uploaded people
> >would become unemployed.
>
> My main argument was that uploads will *cost* much less, not that
> they would be more capable.
>
> >... Eventually the enormous population of uploads would be forced to
> >work at very low subsistence wages - the cost of their electricity
> >and disk space - ruled over by a very few of the most successful of
> >the uploads.

Wouldn't the smart thing for posthuman uploads be to design an army of
narrow AI drones to do all the labor that society requires?  For
example a "Waiter AI" could do a reasonable job of taking your dinner
order and bringing it out to you, while a "Cook AI" could do a
reasonable job of making it, but neither would have a repertoire of
cognitive capacities suggestive of a level of consciousness worthy of
autonomous rights (this example is assuming that there will still be
biological humans wanting to eat dinner).  Since the drones would
require nothing but the energy to run, and even that would be produced
through the labor of other drones, their labor, and therefore all goods
and services, would essentially be free.  And so on.  For posthumans who
are so smart, that would seem like the reasonable thing to do.

Martin




Hughes, James J. Thu Mar 30 06:41:46 BST 2006

Thanks for taking the time to respond, Robin.

And doing so in a comradely, academic exchange even though my
description of your views was polemical and, in your analysis,
incorrect. 

However, I was just in Oxford with you and saw you give another version
of this very "Crack of a Future Dawn" scenario, in which you did not
mention any possible regulatory or political solution to the
general unemployment that you foresee being created by a
proliferation of uploaded workers, so I don't think my analysis of your
views needs much revision. When a member of the audience asked, as I
have in the past, whether we might not want to use some kind of
political method to prevent general unemployment and wealth
concentration in this Singularitarian scenario, your response was, as it
has been in the past and was in that paper, that no one will want to
prevent this coming to pass, since we will all own stocks in this
explosive economy and will therefore all be better off than we were
before. 

In the essay you say:

"imagine the potential reaction against strong wage competition by
"machine-people" with strange values. Uploading might be forbidden, or
upload copying might be highly restricted or forbidden...If level heads
can be found, however, they should be told that if uploading and copying
are allowed, it is possible to make almost everyone better off. While an
upload transition might reduce the market value of ordinary people's
human capital, their training and ability to earn wages, it should
increase total wealth, the total market value of all capital, including
human capital of uploads and others, real estate, company stock, etc.
Thus it can potentially make each person better off."

I'll say again: I think the scenario is a scary one, in ways that you
don't appear to recognize, because most people have little confidence
that they would actually be better off in a world in which all "human
capital" is radically devalued by the proliferation of electronic
workers. That includes me; although I do own stocks in mutual funds
today, and those stocks might benefit from a Singularitarian economic
boom, I still feel like my world and my future is being determined by
unaccountable elites who control my political institutions, elites quite
content to see vast numbers of people immiserated as inequality grows.
The scenario you describe is one where it appears these inequalities of
wealth and power would just get a lot more extreme and far less
politically ameliorable.

If Singularitarianism wants to paint a truly attractive future, and not
one that simply fans the flames of Luddism, then it has to put equality
and social security in the foreground and not as a dismissive
afterthought. To his credit Moravec, in Robot, argues for a
universalization of Social Security as a response to human structural
unemployment caused by robot proliferation. Marshall Brain reached the
same conclusion, and several of the principals at the IEET and I are
supporters of the concept of a Basic Income Guarantee. But since this
would require state intervention I suspect you don't favor such a
proposal, which is why you advocate(d) minarchist solutions like
universal stock ownership in the Singularity.

Perhaps the most troubling parts of the essay are:

"As wages dropped, upload population growth would be highly selective,
selecting capable people willing to work for low wages, who value life
even when life is hard. Soon the dominant upload values would be those
of the few initial uploads with the most extreme values, willing to work
for the lowest wages."

And then later

"Those who might want to be one of the few highly copied uploads should
carefully consider whether their values and skills are appropriate. How
much do you value life when it is hard and alien?...Those who don't want
to be highly-copied uploads should get used to the idea of their
descendants becoming a declining fraction of total wealth and
population...."

How is this different from a radical Social Darwinism, arguing that this
Pac-Man world will eliminate all the uppity prole uploads, the ones who
might want minimum wage laws or unions, and just leave the good hard
workers willing to work for subsistence?

You say:

> I talk disapprovingly about wealth inequality among humans, 
> caused by some humans not insuring against an upload transition. 

Which I assume refers to this passage, the only one that mentions
inequality in the essay:

"Would there be great inequality here, with some lucky few beating out
the just-as-qualified rest?...Computer technology should keep improving
even if work on uploading is delayed by politics, lowering the cost of
copying and the cost to run fast. Thus the early-adopter advantage would
increase the longer uploading is delayed; delaying uploading should
induce more, not less, inequality. So, if anything, one might prefer to
speed up progress on uploading technology, to help make an uploading
transition more equitable."

So yes, you did argue against inequality, but only in passing, as one
reason why a rapid transition to general unemployment in an
upload-dominated economy should not be hampered by political
regulation. If we try to slow this transition, a minority of
uploads will just become even richer. So we should speed the transition
to give more uploads a piece of the pie. 

But you are right that you do not explicitly describe a concentration of
wealth, only mention it as a possibility in order to discourage
regulation, and you do describe mechanisms that might spread wealth out
among the uploads and humans. But then how is that consistent with the
scenario "As wages dropped, upload population growth would be highly
selective, selecting capable people willing to work for low wages"?
Doesn't that imply that humans would be unemployed, most uploads working
for upload-subsistence, and some very few uploads will be raking in the
big bucks? Or is the scenario one of truly universal and equal poverty
among all the uploads, with no wealthy owners of capital anymore in the
equation?

You note that we might progressively tax wealth accumulators in this
economy, but then in the last sentence of the paper's abstract you say:

"total wealth should rise, so we could all do better by accepting
uploads, or at worse taxing them, rather than trying to delay or
segregate them." 

And then later:

"If forced to act by their constituents, politicians would do better to
tax uploads and copies, rather than forbidding them, and give the
proceeds to those who would otherwise lose out."

Which pretty clearly implies that you only grudgingly accept Social
Security and redistributive taxes on uploaded wealth accumulators as a
concession to political unrest, and not as an obvious and essential step
in maintaining an egalitarian polity.

That said, the reason I devoted the attention to the essay that I did
is that I think it is a very smart and foresightful scenario of
a future that could come to pass. But I do think the piece illuminates a
techno-libertarianism that most people will find scary, and which our
movement needs to contextualize in proactive social policies, precisely
in order to defend the possibility of uploading from bans. As you note,
in such a future I would recommend (fight for) redistribution from the
wealthy - uploads or humans - to the majority, to ensure some form of
rough equality, and some form of Social Security more egalitarian and
universal than stock ownership, such as a Basic Income Guarantee. (Did
you have in mind the distribution of mutual fund shares to everyone in
the developed and developing world? If so, I think that would be a
welcome addition to the scenario.) 

And if the economy and the world start to change with the rapidity that
you forecast at Oxford - doubling every couple of weeks, with a
proliferation of uploads - I would also favor strong regulatory action
to slow and temper the transition. A rapid take-off Singularity is both
dangerous and anti-democratic, and we should say so and say what kind of
policies we think are necessary to make sure it doesn't happen, and how
we can slow it down if it starts.  You don't really endorse
redistributive, Social Security or regulatory policies in the essay, but
rather argue against them, and you didn't even mention them at Oxford,
and clearly consider them suboptimal, counter-productive concessions to
Luddites. So I do think we have a difference of opinion that I have not
mischaracterized.

However, I apologize again for the polemical tone of the passage since
we are friends, and for not more fully describing your views. 

J.




BillK Thu Mar 30 09:12:07 BST 2006

On 3/30/06, Hughes, James J. wrote:
> However, I just was in Oxford with you and saw you give another version
> of this very "Crack of a Future Dawn" scenario in which you did not
> mention any regulatory or political solution possible to this scenario
> of general unemployment that you foresee being created by a
> proliferation of uploaded workers, so I don't think my analysis of your
> views needs much revision. When a member of the audience asked, as I
> have in the past, whether we might not want to use some kind of
> political method to prevent general unemployment and wealth
> concentration in this Singularitarian scenario your response was, as it
> has been in the past and was in that paper, that no one will want to
> prevent this coming to pass since we will all own stocks in this
> explosive economy and will therefore all be better off than we were
> before.
>
<snip>

I think this article has some relevance here.
<http://www.wired.com/wired/archive/14.04/gecon.html>

Why abundance sucks, and other unexpected lessons of the game economy.
By Edward Castronova

What if everything in life were free? You'd think we'd be happier. But
game designers know better: We'd be bored.

Economics is loosely defined as choice under scarcity. After all, in
the real world, there's only so much to go around. You can't always
get what you want, and unfulfilled desires give rise to markets. But
in a game world, there's no inherent reason for scarcity. Game
designers have given us plenty of utopias where we can have all the
mithril we want, to buy whatever we want whenever we want it. Problem
is, those worlds turn out to be dull.
etc........

-------------------------------------

So what will humans actually do in the future of plenty?

Research?  Your AI assistant can do that better than you.

Design something? You have a world of products available to you.

Art?  Everybody is an artist now.

Go travelling?  Why bother? Send a remote camera/audio to have a look for you.



Party with other humans?  Now you're talking!

Argue about politics in the human reservation? Probably.


BillK




Marc Geddes Thu Mar 30 09:19:32 BST 2006

--- "Hughes, James J." <james.hughes at trincoll.edu>
wrote:
 
> If Singularitarianism wants to paint a truly
> attractive future, and not
> one that simply fans the flames of Luddism, then it
> has to put equality
> and social security in the foreground and not as a
> dismissive
> afterthought. To his credit Moravec, in Robot,
> argues for a
> universalization of Social Security as a response to
> human structural
> unemployment caused by robot proliferation. Marshall
> Brain reached the
> same conclusion, and several of the principals at
> the IEET and I are
> supporters of the concept of a Basic Income
> Guarantee. But since this
> would require state intervention I suspect you don't
> favor such a
> proposal, which is why you advocate(d) minarchist
> solutions like
> universal stock ownership in the Singularity.
> 

Well, as you know Dr J, I started out a moderate
Libertarian but was eventually converted to your
view-point - turns out I was a closet left-liberal
after all.  It's now hard for me to believe that I
could ever have given Libertarianism any credence. 
There's just *so* much empirical evidence against it. 
 Also read your book, thought it was quite good. 
Dealt very well with the political side of things. 

Libertarianism deals in idealizations which bear
little relation to human nature as is (communism had
the same problem).  Mostly it takes a hugely
over-optimistic view of human nature - Libertarians
like to believe that they're super-competent and
super-rational - that they're winners.  But then I
realized - hey wait a minute - what if I'm not one of
the winners?  What if I'm not a super-genius?   

Like me...thinking I could take on Eliezer and code up
Singularity in a couple of weeks...

I kept reaching for the math skills to blast 'em (the
SL4 crowd), but where my math skills should have been
there was just a hole :-(  

I kept reaching and reaching for the math to blast all
the world's baddies but the post-human skills are just
not there :-(  It's a horrible horrible feeling.

We all want to be X-Men - it's a really cool fantasy -
unfortunately for most of us - the reality is pretty
dire - we're weak and stupid :-(

After a few years of intense effort I've managed to
achieve a sort of bizarre 'partial awakening' as
regards mathematics... a sort of pseudo-post-human
awareness as it were...but only after intense effort
and only at great psychological cost.  That's why you
see me 'freaking out' in some of my earlier posts...
it's possible to SEE mathematical entities.... I mean
REALLY see them... something analogous to the other
senses (like looking at a chair for instance)... a
modality.  Again, I don't think this is recommended
for most people... it's a *freaky* thing... all I can
say when I see math is f**k!  Not recommended for
humans at all mate, no way :-(

As to the Singularity: I just wish we could hurry past
all the boring mathematics and get to the cool
butt-kicking of all the world's baddies ;)

Knock out the corporations, knock out the dictators,
set up global democracy (but only on issues affecting
every-one, like the economy).  Let's have strong
global democracy for the issues affecting every-one,
but strong individual rights for individual issues - a
person's right to control their own mind and body
should NOT be subject to democratic vote.  That's the
system I expect the FAI to set up at Singularity.  






	

	
		




Marc Geddes Thu Mar 30 12:07:36 BST 2006

--- BillK <pharos at gmail.com> wrote:

> 
> I think this article has some relevance here.
>
<http://www.wired.com/wired/archive/14.04/gecon.html>
> 
> Why abundance sucks, and other unexpected lessons of
> the game economy.
> By Edward Castronova
> 
> What if everything in life were free? You'd think
> we'd be happier. But
> game designers know better: We'd be bored.
> 
> Economics is loosely defined as choice under
> scarcity. After all, in
> the real world, there's only so much to go around.
> You can't always
> get what you want, and unfulfilled desires give rise
> to markets. But
> in a game world, there's no inherent reason for
> scarcity. Game
> designers have given us plenty of utopias where we
> can have all the
> mithril we want, to buy whatever we want whenever we
> want it. Problem
> is, those worlds turn out to be dull.
> etc........
> 
> -------------------------------------
> 
> So what will humans actually do in the future of
> plenty?

Well, economics and politics still operate; it's
just that everything jumps to a much higher level -
analogous to previous leaps forward such as the
agricultural and industrial revolutions...  Hanson
estimated a 150x boost to the global economy...
everyone is much richer but resources are *not*
infinite.  There'll be new challenges, new horizons. 
And politics/economics will still be with us.
 
> 
> Research?  Your AI assistant can do that better than
> you.
> 
> Design something? You have a world of products
> available to you.
> 
> Art?  Everybody is an artist now.
> 
> Go travelling?  Why bother? Send a remote
> camera/audio to have a look for you.
> 
> 
> 
> Party with other humans?  Now you're talking!
> 
> Argue about politics in the human reservation?
> Probably.
> 
> 
> BillK
> 

*Marc puts on a really freaky expression and
concentrates intensely....

I SEE mathematics... I SEE the relationships between
all the metaphysical properties in reality...I SEE the
universal value system...

O.K, my head is spinning again, I've got that really
'freaked out' feeling again, but I'm learning to
handle this new perception better without going nuts
on-line now... ;)

Post-humans will discover the universal value system
I'm seeing right now and put it into practice: no
rational conscious mind can fail to value the
existence of consciousness itself... consciousness is
the highest value.  From this flows the following
corollaries:  

(1) Valuing positive Qualia (positive direct conscious
experiences - which are ends in themselves- the arts
deal mainly with this)

(2) Valuing the continuation of Qualia (is the role of
intelligence - the ability to form causal models of
reality which project into the future corresponding to
intellectual understanding - loosely equivalent to the
scientific method )

(3)  Valuing the growth (or expansion of capability)
of Qualia and the ability of consciousness to act on
the external world (is the role of will - includes the
ability to achieve desired Utilities - to tightly
'narrow the space of possibilities').

In short:  Art, Science, Philanthropy (helping others)
and the explorative (entrepreneurial) spirit.

No need for machines to make us redundant - we can be
'uplifted' to their level.

Does this answer your question as to what we'll be
doing? 




	

	
		




Robin Hanson Fri Mar 31 16:03:17 BST 2006

James, you are acting more like a politician than a scholar here.  I 
tried to focus attention on how the specific words of your summary 
differ from the specific words of my paper that you purport to 
summarize, but you insist on trying to distill a general gestalt from 
my writings, based on a simple one-dimensional redistribution-based 
political axis.   Apparently in your mind this axis consists of good 
people on the left who support redistribution, employment, and high 
wages in the service of equality, and evil people on the right who 
seek inequality, unemployment, and low wages in the service of social 
Darwinism.   Since I predict that the technology of uploads will lead 
to unemployment for humans and low wages and Darwinian selection for 
uploads, and I only mention and endorse one possible redistribution, 
apparently not enthusiastically enough for you, I must be one of the 
evil people.   Come on!

With cheap uploads there is pretty much no way to escape 
"unemployment" for most humans.  That is, you could give people 
make-work jobs, and/or pay them lots more than the value of their 
work, but the truth is that for most people the value of their labor 
to others would be little, and if that were all they were paid they 
would not work.   Also, unless we are willing to impose population 
controls on uploads far more Draconian than those in China today, we 
could not escape uploads getting low wages and undergoing Darwinian 
selection.   The only way to induce upload wages far above the cost 
of creating uploads would be to prevent the vast majority of uploads 
from existing at all.   And the only way to avoid Darwinian selection 
among uploads would be, in addition, to severely limit the number of 
copies made of individual uploads.   These are not statements of 
advocacy; they are just the hard facts one would have to deal with 
under this scenario.  So are you criticizing me for not endorsing 
Draconian upload population control?

I repeat again the conclusion of my last message:
>while he favors "redistribution," it is not at all clear to me who 
>he wants to take from, and who to give to under the scenario I 
>describe.   After all, given the three distinctions of human/upload, 
>rich/poor, and few/many-copied, there are eight possible classes to consider.

To elaborate, the key reason I hesitate to more strongly endorse 
redistribution is that it is not clear who are really the "deserving 
poor" to be aided in this scenario.   In dollar terms the poorest 
would be the uploads who might be prevented from existing.  If one 
only considers the per-capita wealth of existing creatures, the 
poorest would be the many copies of those "who value life even when 
life is hard."   But these would be the richest uploads in clan 
terms, in that such clans would have the most copies; counting by 
upload clans identifies a different poor.   Humans would have far 
larger per-capita income, but may be poorer if we talk in terms of 
income relative to their subsistence level, since the subsistence 
level for uploads would be far lower than that of humans.   Should 
their not taking advantage of the option to convert from human to 
upload be held against the "poor" humans?   Finally, a few humans 
will have rare abilities to make substantial wages; does that make 
them "rich" even if they do not own much other wealth?   If you are 
going to criticize me for not explicitly supporting the 
redistribution you favor, I think you should say more precisely who 
you would take from and who you would give to.

Now for a few more detailed responses:
>If Singularitarianism wants to paint a truly attractive future, and 
>not one that simply fans the flames of Luddism, then it has to put 
>equality and social security in the foreground and not as a 
>dismissive afterthought.

My purpose is *not* to paint a truly attractive future, my purpose is 
to paint as realistic a picture as possible, whatever that may be.

>... in Oxford with you ...  When a member of the audience asked, as 
>I have in the past, whether we might not want to use some kind of 
>political method to prevent general unemployment and wealth
>concentration in this Singularitarian scenario

This did not happen.  One person asked "what does your economic model 
predict people will do" in response to improving robots, but he said 
nothing specifically about politics, employment, or wealth concentration.

>your response was, as it has been in the past and was in that paper, 
>that no one will want to prevent this coming to pass

I never said that no one would try to stop uploads.

>I'll say again: I think the scenario is a scary one, in ways that 
>you don't appear to recognize, ... although I do own stocks in 
>mutual funds today, and those stocks might benefit from a 
>Singularitarian economic
>boom, I still feel like my world and my future is being determined 
>by unaccountable elites who control my political institutions, 
>elites quite content to see vast numbers of people immiserated as 
>inequality grows.

I am well aware that the scenario I describe is scary, and also that 
many people do not trust political elites to act in their 
interest.  I do not argue that people should trust political elites.

>"As wages dropped, upload population growth would be highly 
>selective, selecting capable people willing to work for low wages." 
>Doesn't that imply that humans would be unemployed, most uploads 
>working for upload-subsistence, and some very few uploads will be 
>raking in the big bucks? Or is the scenario one of truly universal 
>and equal poverty among all the uploads, with no wealthy owners of 
>capital anymore in the equation?

My scenario is consistent with both high and low concentration of 
ownership of capital, and with high or low inequality of wages among 
uploads.   I make no prediction about there being a few very rich uploads.

>Moravec, in Robot, argues for a universalization of Social Security 
>as a response to human structural
>unemployment caused by robot proliferation. ...  since this would 
>require state intervention I suspect you don't favor such a 
>proposal,  ... You don't really endorse redistributive, Social 
>Security or regulatory policies in the essay, but rather argue 
>against them, and you didn't even mention them at Oxford,
>and clearly consider them suboptimal, counter-productive concessions 
>to Luddites.  ...  Which pretty clearly implies that you only 
>grudgingly accept Social Security and redistributive taxes on 
>uploaded wealth accumulators as a concession to political unrest, 
>and not as an obvious and essential step
>in maintaining an egalitarian polity.

You keep jumping to conclusions.   Just because I take no position 
does not mean I am against your position.



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 




Giu1i0 Pri5c0 Fri Mar 31 17:45:31 BST 2006

Now this is a really fascinating exchange of views.

Besides black and white good and bad people, there are also grey
people who believe some degree of inequality is a sad fact of life,
but think some degree of redistribution should be used to avoid
excessive unfairness. I am one of the grey people, but I think also
Robin and James are grey (perhaps with slightly different shades of
grey).

In Robin's scenario, the most unfortunate uploads provide a very
dramatic example of rich-eating-poor.

No solution to offer, but a consideration. Upload tech will not be
developed in one day - there may be 20 years between the first successful
lab experiments and wide deployment. So there will be (I hope) enough
time to work out the details of a society with uploads.
G.

On 3/31/06, Robin Hanson <rhanson at gmu.edu> wrote:
> James, you are acting more like a politician than a scholar here.  I
> tried to focus attention on how the specific words of your summary
> differ from the specific words of my paper that you purport to
> summarize, but you insist on trying to distill a general gestalt from
> my writings, based on a simple one-dimensional redistribution-based
> political axis.   Apparently in your mind this axis consists of good
> people on the left who support redistribution, employment, and high
> wages in the service of equality, and evil people on the right who
> seek inequality, unemployment, and low wages in the service of social
> Darwinism.   Since I predict that the technology of uploads will lead
> to unemployment for humans and low wages and Darwinian selection for
> uploads, and I only mention and endorse one possible redistribution,
> apparently not enthusiastically enough for you, I must be one of the
> evil people.   Come on!...




Hughes, James J. Fri Mar 31 18:03:10 BST 2006

> Since I predict that the technology of uploads will lead 
> to unemployment for humans and low wages and Darwinian 
> selection for uploads, and I only mention and endorse one 
> possible redistribution, apparently not enthusiastically 
> enough for you, I must be one of the 
> evil people.  

I don't think you are evil. I just think you share the worldview of many
American economists, and most of the 1990s transhumanists, who prefer a
minarchist, free-market oriented approach to social policy, and do not
see redistribution and regulation as desirable or inevitable. My book
was a critique of that point of view, and I used your article as a
brilliant paradigmatic example of it. Empirically, the people who are
most attracted to libertarianism, neo-liberalism or whatever are those
who are most likely to benefit from those policies: affluent men in the
North. My challenge to you, and all of us, is that we need to break out
of those blinkers and try to see the world from the perspective of the
billions who live on dollars a day, and those who are quite suspicious
of emerging technologies because they are used to bomb them or exploit
them, and whose benefits are often inaccessible.

As to your assertion that your piece is merely descriptive and not
normative, I leave that to the reader to judge:

http://hanson.gmu.edu/uploads.html

To me it is clear that you are excited about this future (A "Crack of a
Future Dawn" after all) and see it as a desirable one with universal
advantages, a future that should not be slowed or regulated by state
intervention. So you are about as non-normative as Karl Marx in Das
Kapital - here is how the system works, here is our inevitable future,
here is how people will react, and here is how we will end up in
paradise. No, no normative analysis needed in techno-utopian determinism
- we either get with the program, or end up on the dustbin of history.

> unless we are willing to impose population 
> controls on uploads far more Draconian than those in China 
> today, we could not escape uploads getting low wages and 
> undergoing Darwinian selection. 

Rights do not exist in isolation. Reproductive rights have to be
balanced against others, such as the right to life, liberty and the
pursuit of happiness. Aubrey, for instance, has been quite clear in
emphasizing that we will inevitably need to consider limits on
reproduction if we have unlimited life expectancy. Uploading and space
exploration only postpone the necessity.

In addition, potential future people, uploads or human, do not have
rights, only existing people do.  So I do think reproductive control on
uploads would make perfect sense, and would be one of the policies that
should be pursued if we were faced with your scenario.

In effect your scenario is one version of the runaway AI scenario, with
individual viral egos instead of one monolithic AI, and I see both as
existential risks that we need transhumanist policies to prevent, not to
facilitate.

>  The only way to induce upload wages far above the cost 
> of creating uploads would be to prevent the vast majority of uploads 
> from existing at all.  

Why then isn't population control the only way to induce human wages to
rise? Yes, labor supply does affect wages, but so do government policies
like worker safety laws, taxation and minimum wages. The fact that these
policies are completely off your radar is part of the problem.

> And the only way to avoid Darwinian selection 
> among uploads would be to in addition severely limit the number of 
> copies made of individual uploads.  

Again you reveal a Social Darwinist view without any acknowledgement
that there can be collective solutions to social problems. Of course we
can prevent the forces of social selection from killing off all the
beings who don't want to work for low wages, and selecting for the
diligent subsistence drones. If there is such a population pressure we
create new selection parameters to encourage or require other population
traits. But again, the notion of social engineering is apparently
anathema.

An example: clearly employers already prefer human workers who work long
hours, are perfectly loyal, and never organize for collective benefits.
To the extent that there are psychopharmaceuticals and cybernetics that
allow employers to "perfect" their workers there will be efforts to
apply them.

So we pass laws that even if we all get to take Modafinil, no one can
work more than 50 hours a week. We pass laws against loyalty
drugs/chips, just as we once outlawed serfdom and company towns. We pass
collective bargaining laws that mandate that all uploads need to use at
least 30% of their CPU cycles for personal, non-remunerative enrichment.

Without these kinds of policies we could drift toward hive-mind drone
existences, losing individual subjective agency, which is one of the
existential threats pointed to by Chairman Bostrom.

> >while he favors "redistribution," it is not at all clear to 
> me who he 
> >wants to take from, and who to give to under the scenario I
> >describe.   After all, given the three distinctions of human/upload, 
> >rich/poor, and few/many-copied, there are eight possible 
> classes to consider.

Rich -> Poor will do nicely thank you, regardless of their number or
instantiation.

> To elaborate, the key reason I hesitate to more strongly 
> endorse redistribution is that it is not clear who are really 
> the "deserving 
> poor" to be aided in this scenario. 

Yes, "deserving poor" is part of the problem. The desirability of rough
social equality does not depend on any notion of "deservingness".

> I do not argue that people should trust 
> political elites.

No, only the unfettered market. Is there any form of law, state or
collective action other than market exchange in your imagined Dawn?

> My scenario is consistent with both high and low 
> concentration of ownership of capital, and with high or low 
> inequality of wages among 
> uploads.   I make no prediction about there being a few very 
> rich uploads.

Sadly, reality is not consistent with the notion that there will be a
new era of equality with radical technological change. The
winners/owners will change, but any equality to be achieved is something
we have to fight for, not something to be fervently wished for.

> Just because I take no position 
> does not mean I am against your position.

Robin, I don't think you have ever taken my position(s) seriously enough
to reject them - they simply are alien to the kind of economic analysis
that you do. I wish you would take them seriously enough to explicitly
reject them so we could have that conversation.

J.




Russell Wallace Fri Mar 31 18:47:11 BST 2006

On 3/31/06, Hughes, James J. <james.hughes at trincoll.edu> wrote:
>
>
> So we pass laws that even if we all get to take Modafinil, no one can
> work more than 50 hours a week. We pass laws against loyalty
> drugs/chips, just as we once outlawed serfdom and company towns. We pass
> collective bargaining laws that mandate that all uploads need to use at
> least 30% of their CPU cycles for personal, non-remunerative enrichment.
>
> Without these kinds of policies we could drift toward hive-mind drone
> existences, losing individual subjective agency, which is one of the
> existential threats pointed to by Chairman Bostrom.
>

I agree with you that this is a potential problem, but rather than rely on a
monolithic government to legislate our way out of it (which has well known
problems of its own), I will suggest that this is exactly the sort of thing
my Domain Protection idea is designed for:

http://homepage.eircom.net/~russell12/dp.html



Russell Wallace Fri Mar 31 18:55:06 BST 2006

On 3/31/06, Robin Hanson <rhanson at gmu.edu> wrote:
>
> James, you are acting more like a politician than a scholar here.  I
> tried to focus attention on how the specific words of your summary
> differ from the specific words of my paper that you purport to
> summarize, but you insist on trying to distill a general gestalt from
> my writings, based on a simple one-dimensional redistribution-based
> political axis.


I think there is more merit to James' position than you are giving him
credit for. I found your article to be well thought out as far as it went,
but it is not enough to say "here's what the future will be like... oh,
guess it's going to suck, too bad". People's reaction to that will be to not
have a future at all, in that case. If we who specialize in this stuff,
who've spent all this time thinking about it, just leave it at that, what
good are we?

No, if we think the default future will suck, what we need to do is to say
"...and here's a proposal for how to improve the chances that it doesn't
suck!" And while I don't think James' proposals will achieve this objective,
I think he's right about that being the direction we need to be thinking in.



Marcelo Rinesi Fri Mar 31 18:43:22 BST 2006

Rushing in where angels and AIs fear to tread...

The notion that -devoid of legal, societal or otherwise restrictions,
assuming that they will be possible and cheap, assuming that they will
behave roughly as Von Neumann-Morgenstern utility maximizers, etc-
uploads will eventually displace humans from most of the economic system
and then compete fiercely between themselves seems reasonable under the
light of what we know of economics (substitute for "game theory" if you
will or even "what *I* would do if I woke up uploaded"). The
qualifications "devoid of legal, etc" are critical in this paragraph, of
course. Change the parameters and the model results change; to some
degree, the polemical question is not whether the model is wrong, but which
end results would be desirable, which of those end results would be
possible, and what parameters would take us there.

That some variants of this scenario won't be very enjoyable for humans
(and even apocalyptic for those populations without access to these
technologies) is also at the very least possible [Guido Nuñez has
devoted a lot of thought, in a similar issue, to the possibility of
nanotech impacting negatively on Third World economies by disrupting the
demand for what little tradeable resources (minerals, etc) we have.]

Thus it seems to me that Robin's model lends strength to James'
argument for some sort of regulation, while James' regulatory framework
(although it is, as Giulio says, perhaps too early to figure out the
exact details or even the rough nature of such a framework) might prove
necessary to make an uploads/non-uploads future a relatively pleasant
one, and thus implementable.

It's not an either/or proposition. In fact, I'd argue that in this case
-given the unprecedented nature of the economic changes involved- there
*won't* be any effective (in human-wellbeing terms) regulatory framework
without a very solid understanding of the economics of uploads (you can't
regulate what you can't model). And on the other hand, an uploaded but
unregulated economy (as far as our models tell us) might well turn out
to be highly problematic in terms of wellbeing.

I *want* uploads (and AI, and the whole enchilada). That's why I want a
good, hard look at them before we plug them on. The possible side
effects of them -as with any technology- *cannot* be an excuse not to go
ahead.

It's just an incentive to be smart about it. *g*

Marcelo




Hughes, James J. Fri Mar 31 19:18:14 BST 2006

> I agree with you that this is a potential problem, but rather 
> than rely on a monolithic government to legislate our way out 
> of them (which has well known problems of its own), I will 
> suggest that this is exactly the sort of thing my Domain 
> Protection idea is designed for:
> 
> http://homepage.eircom.net/~russell12/dp.html

As I understand your proposal Russell, it is that we would ask the
world-controlling Friendly AI to set up regions that are not allowed to
interfere with one another, one for uploads and one for ur-humans.

This of course broaches the problems that we face today with the
enforcement of international agreements that countries should not invade
one another.

A) There are sometimes good reasons for countries to be invaded, as when
they pose a threat to the rest or are violating human rights.

B) There needs to be a legitimate, accountable global authority to
enforce those agreements, and unfortunately the US Presidency is not
such an authority

I don't see how a Friendly AI gets us there. If it has the kind of power
necessary, it is clearly monolithic. If it is legitimate, but not
accountable, it's a benevolent monarchy (cross your fingers). If it is
legitimate and accountable (replace-able, control-able) then it is a
part of global democratic governance.

J.




Dawn Amato Fri Mar 31 19:18:23 BST 2006

Hi Marc, 
  Have you ever read about Paul Erdős, the notorious math superstar? I always
considered him the "Elvis" of math, but you are right, he was not really good
at hanging out in reality. He considered anybody who did not do math to be
"dead". His friends would think "oh, no, I didn't know so-and-so was dead" and
then find out he was alive but had just stopped doing math. If somebody really
died, Paul Erdős would say they had "left the building". And he started that
saying; it was picked up and used by others later.
   
   And I agree with your post. Libertarians refuse to recognize the obvious,
that is that the strong will crush the weak unless reined in. In a perfect
democracy, laws are supposed to be on the books to protect the weak, not the
strong. No, we don't live in a perfect democracy, but it gives us something
to strive for.
   
  D. Amato
   
  Paul Erdős   1913-1996   "My mother said, `Even you, Paul, can be in only
one place at one time.'
Maybe soon I will be relieved of this disadvantage.
Maybe, once I've left, I'll be able to be in many places at the same time.
Maybe then I'll be able to collaborate with Archimedes and Euclid."
   
  http://theory.cs.uchicago.edu/erdos.html
   
  http://www.oakland.edu/enp/
   
  P.S. I am a math groupie, not a mathematician.  I like brainy guys. 
  

Marc Geddes <m_j_geddes at yahoo.com.au> wrote:
  
--- "Hughes, James J." 
wrote:

> If Singularitarianism wants to paint a truly
> attractive future, and not
> one that simply fans the flames of Luddism, then it
> has to put equality
> and social security in the foreground and not as a
> dismissive
> afterthought. To his credit Moravec, in Robot,
> argues for a
> universalization of Social Security as a response to
> human structural
> unemployment caused by robot proliferation. Marshall
> Brain reached the
> same conclusion, and several of the principals at
> the IEET and I are
> supporters of the concept of a Basic Income
> Guarantee. But since this
> would require state intervention I suspect you don't
> favor such a
> proposal, which is why you advocate(d) minarchist
> solutions like
> universal stock ownership in the Singularity.
> 

Well, as you know Dr J, I started out a moderate
Libertarian but was eventually converted to your
view-point - turns out I was a closet left-liberal
after all. It's now hard for me to believe that I
could ever have given Libertarianism any credence. 
There's just *so* much empirical evidence against it. 
Also read your book, thought it was quite good. 
Dealt very well with the political side of things. 

Libertarianism deals in idealizations which bear
little relation to human nature as is (communism had
the same problem). Mostly it takes a hugely
over-optimistic view of human nature - Libertarians
like to believe that they're super-competent and
super-rational - that they're winners. But then I
realized - hey, wait a minute - what if I'm not one of
the winners? What if I'm not a super-genius? 

Like me...thinking I could take on Eliezer and code up
Singularity in a couple of weeks...

I kept reaching for the math skills to blast 'em (the
SL4 crowd), but where my math skills should have been
there was just a hole :-( 

I kept reaching and reaching for the math to blast all
the world's baddies but the post-human skills are just
not there :-( It's a horrible horrible feeling.

We all want to be X-Men - it's a really cool fantasy -
unfortunately for most of us - the reality is pretty
dire - we're weak and stupid :-(

After a few years of intense effort I've managed to
achieve a sort of bizarre 'partial awakening' as
regards mathematics... a sort of pseudo-post-human
awareness as it were...but only after intense effort
and only at great psychological cost. That's why you
see me 'freaking out' in some of my earlier posts...
it's possible to SEE mathematical entities.... I mean
REALLY see them... something analogous to the other
senses (like looking at a chair for instance)... a
modality. Again, I don't think this is recommended
for most people... it's a *freaky* thing... all I can
say when I see math is f**k! Not recommended for
humans at all mate, no way :-(

As to the Singularity: I just wish we could hurry past
all the boring mathematics and get to the cool
butt-kicking of all the world's baddies ;)

Knock out the corporations, knock out the dictators,
set up global democracy (but only on issues affecting
every-one, like the economy). Let's have strong
global democracy for the issues affecting every-one,
but strong individual rights for individual issues - a
person's right to control their own mind and body
should NOT be subject to democratic vote. That's the
system I expect the FAI to set up at Singularity. 











_______________________________________________
wta-talk mailing list
wta-talk at transhumanism.org
http://www.transhumanism.org/mailman/listinfo/wta-talk


			



Robin Hanson Fri Mar 31 19:40:02 BST 2006

At 12:03 PM 3/31/2006, James Hughes wrote:
>I don't think you are evil. I just think you share the worldview of many
>American economists, and most of the 1990s transhumanists, who prefer a
>minarchist, free-market oriented approach to social policy, and do not
>see redistribution and regulation as desirable or inevitable. ...
> > I do not argue that people should trust  political elites.
>No, only the unfettered market. Is there any form of law, state or
>collective action other than market exchange in your imagined Dawn?

You keep making these false statements about me, which I deny.   I teach
economics and in most lectures I make statements about the desirability
and inevitability of regulation and redistribution.   Really.

>... Yes, labor supply does affect wages, but so do government policies
>like worker safety laws, taxation and minimum wages. The fact that these
>policies are completely off your radar is part of the problem.

I am well aware of such policies.  But my claim that in this context they
would "prevent the vast majority of uploads from existing at all" if they
raised wages a lot remains true.
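
The free-entry logic behind this claim can be sketched in a few lines of
Python (an illustration of the argument only, not code from the paper; the
demand curve and all the numbers are made up):

```python
# Sketch of free entry: uploads are copied as long as the market wage
# exceeds the cost of creating and running a copy, driving wages down
# toward that cost.  A wage floor above cost does raise wages -- but
# only by capping how many uploads come to exist at all.

def wage(n, a=100.0, b=0.01):
    """Hypothetical downward-sloping labor demand: wage when n workers are employed."""
    return max(a - b * n, 0.0)

def equilibrium_population(cost, wage_floor=None, step=1000):
    """Add upload copies while entry remains profitable at the effective wage."""
    n = 0
    while True:
        w = wage(n + step)
        if wage_floor is not None:
            if w < wage_floor:   # demand no longer supports the floor
                return n
            w = wage_floor
        if w <= cost:            # entry no longer profitable
            return n
        n += step

free_entry = equilibrium_population(cost=1.0)
with_floor = equilibrium_population(cost=1.0, wage_floor=50.0)
print(free_entry, with_floor)  # the wage floor supports far fewer uploads
```

Under these invented parameters the floor roughly halves the upload
population, which is the trade-off at issue: higher wages per upload, far
fewer uploads in existence.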

> > while he favors "redistribution," it is not at all clear to me who he
> > wants to take from, and who to give to under the scenario I
> > describe.   After all, given the three distinctions of human/upload,
> > rich/poor, and few/many-copied, there are eight possible
> > classes to consider.
>
>Rich -> Poor will do nicely thank you, regardless of their number or
>instantiation.

I gave a long analysis showing how there were at least five different
ways to conceive of who are the "poor" in such a scenario, and I have
twice now asked you to clarify which of these groups you want to favor
with redistribution.   You complain that I have not supported "redistribution"
but without clarification this can only be a generic slogan.



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 




Russell Wallace Fri Mar 31 19:43:12 BST 2006

On 3/31/06, Hughes, James J. <james.hughes at trincoll.edu> wrote:
>
> As I understand your proposal Russell, it is that we would ask the
> world-controlling Friendly AI to set up regions that are not allowed to
> interfere with one another, one for uploads and one for ur-humans.


Basically yes. (More specifically, I would see there being many domains,
including ones where uploads are allowed to do as they like, ones where
uploads are restricted, ones where uploads aren't allowed at all, etc.)

This of course broaches the problems that we face today with the
> enforcement of international agreements that countries should not invade
> one another.
>
> A) There are sometimes good reasons for countries to be invaded, as when
> they pose a threat to the rest or are violating human rights.
>
> B) There needs to be a legitimate, accountable global authority to
> enforce those agreements, and unfortunately the US Presidency is not
> such an authority


Indeed so!

I don't see how a Friendly AI gets us there. If it has the kind of power
> necessary, it is clearly monolithic. If it is legitimate, but not
> accountable, it's a benevolent monarchy (cross your fingers). If it is
> legitimate and accountable (replace-able, control-able) then it is a
> part of global democratic governance.
>

It would be monolithic, but it would not be a government; conversely, there
would be governments, but they would not be monolithic; so the fatal
combination of the two attributes in one entity would be avoided.

I should clarify (in fact, I should write a second draft of the original
document with said clarifications, but for the usual problem, lack of time)
that my concept of "Friendly AI" is much more limited than some people's.
Let me be more specific and call it the Domain Protection Agent (DPA).

I'm not counting on the DPA being an unfathomably wise and benevolent entity
that could be trusted to make all our decisions for us - if such
unfathomable wisdom and benevolence can be achieved, great, but I'm not
relying on it. I'm counting on something that's smart enough to understand
and carry out our orders, wise enough to understand there are some orders
that shouldn't be carried out, lacking in the lust for power that's a
hardwired part of the human genetic heritage, and last but far from least,
_durable_.

A government is an entity that rules as it sees fit, within the very loose
boundaries set by constitutions and the political process. There are lots of
really huge, well known problems with just trusting any entity to do that
within one nation for a mere few centuries (having it be an institution as
in democracy rather than an individual as in monarchy or dictatorship is of
course the worst solution apart from all the others). No entity can be
trusted to do this for the whole world for so much as a century let alone
for all time - so I propose that no entity be allowed to do so. The DPA's
sole task is to enforce the domains. Government is an internal function for
each domain separately.



Russell Wallace Fri Mar 31 19:49:11 BST 2006

On 3/31/06, Dawn Amato <blaseparrot at yahoo.com> wrote:
>
>    And I agree with your post. Libertarians refuse to recognize the
> obvious, that is that the strong will crush the weak unless reined in.


This is why I don't advocate doing away with all laws...

In a perfect democracy, laws are supposed to be on the books to protect the
> weak, not the strong.


...only the ones whose actual (rather than supposed) function is to oppress
the weak... which unfortunately, human nature being what it is, is most of
them.

No, we don't live in a perfect democracy, but it gives us something to
> strive for.


Indeed.



Robin Hanson Fri Mar 31 19:55:09 BST 2006

At 12:43 PM 3/31/2006, Marcelo Rinesi <mrinesi at fibertel.com.ar> wrote:
>The notion that -devoid of legal, societal or otherwise restrictions,
>assuming that they will be possible and cheap, assuming that they will
>behave roughly as Von Neumann-Morgenstern utility maximizers, etc-
>uploads will eventually displace humans from most of the economic system
>and then compete fiercely between themselves seems reasonable under the
>light of what we know of economics (substitute for "game theory" if you
>will or even "what *I* would do if I woke up uploaded"). The
>qualifications "devoid of legal, etc" are critical in this paragraph, of
>course. Change the parameters and the model results change; to some
>degree, the polemical question is not that the model is wrong, but what
>end results would be desirable, which ones of those end results would be
>possible, and what parameters would take us there.

Yes, that is just how economic theorists like myself work.   We first 
create a baseline model, the simplest one we can come up with that 
describes the essence of some situation, and then we vary that model 
to explore the effects of both various outside influences and of 
possible policies one might choose.   The simplest model of most 
situations tends to be a low regulation model, but that does not mean 
that we are recommending no regulation.  That is just usually the 
best starting point for considering variations.
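
The workflow can be made concrete with a toy example (my own illustration,
not drawn from any of the papers under discussion; the linear demand and
supply curves and policy numbers are invented):

```python
# A toy version of the workflow described above: start from the simplest
# baseline model, then re-run it under policy variations to see how the
# equilibrium moves.

def equilibrium(tax=0.0, subsidy=0.0):
    """Hypothetical linear market: demand p_d(q) = 10 - q, supply p_s(q) = 1 + q.
    A per-unit tax shifts supply up; a per-unit subsidy shifts it down.
    Returns (quantity, price paid by buyers)."""
    # Solve 10 - q = 1 + q + tax - subsidy for q.
    q = (10 - 1 - tax + subsidy) / 2
    return q, 10 - q

# Baseline first, then variations around it.
for name, params in [("baseline", {}), ("tax", {"tax": 3.0}), ("subsidy", {"subsidy": 3.0})]:
    q, p = equilibrium(**params)
    print(f"{name}: quantity={q:.1f}, buyer price={p:.1f}")
```

The low-regulation baseline is just the `tax=0, subsidy=0` starting point;
the policy cases are variations layered on top of it, which is the sense in
which a simple unregulated model is an analytical starting point rather
than a recommendation.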



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 




Hughes, James J. Fri Mar 31 20:15:01 BST 2006

> You keep making these false statements about me, which I 
> deny. 

I'm sorry you think I'm misrepresenting you. Of course you know about
the political side of political economy, and I'm sure you teach about
it. What I keep wanting is more realistic application and advocacy of
the legitimate role of democratic deliberation and law in your writing. 

You are associated, for instance, with "ideas markets" and market-based
approaches to aggregating social preferences as a way to replace
democratic mechanisms. As I said, I think your proposals are interesting
and I would love to see the results of the experiments. But they do
indicate a directionality in your work over the last fifteen years,
arguing for a shift from reliance on democratic deliberation to market
mechanisms. 

Isn't that the case? Isn't it fair to characterize you as a libertarian
economist? 

> worker safety laws, taxation and minimum wages...
> would "prevent the vast majority of uploads from 
> existing at all" if they raised wages a lot remains true.

Yes, we agree about that. If we regulated uploads in certain ways it
would restrict the incentive to clone/bud/build more of them. Just as
passing laws that you must send your kids to school, instead of working
them to death in the fields or factories, changes kids from exploitable
labor into luxury consumables, reducing the economic incentive to have
them.

> I gave a long analysis showing how there were at least five 
> different ways to conceive of who are the "poor" in such a 
> scenario, and I have twice now asked you to clarify which of 
> these groups you want to favor
> with redistribution.   You complain that I have not supported 
> "redistribution"
> but without clarification this can only be a generic slogan.

Your examples are interesting, and worthy of additional discussion, but
I really don't have to parse them before I can advocate a general
principle that I want to live in a roughly equal society.

But I'll make a stab: in other writing I've pointed to the fact that
liberal democracy is founded on the consensual myth of the discrete,
continuous, autonomous individual. To the extent that neurotechnology
erodes that consensual illusion, it fundamentally problematizes liberal
democracy (and "the market"). I call that the "political Singularity,"
and I don't mean that in a whoopee! way.

So the problem you pose of whether a "clan" of upload clones, all
sharing core identity software, should be treated as one - very rich -
individual or a bazillion very poor individuals is a really serious
problem for the future. Perhaps we will need a bicameral legislature,
like the US Senate and House, one based on personalities and the other
on bodies. 

I don't know and I find the prospect very troubling. I would like to
live in a world, like Brin's Kiln People, where I could send a copy of
myself to work while the base-unit me stays home to read and cook. But
in Brin's world, even though the clones only last 48 hours, they still
have existential crises about whether they are the same as the base
person, or a separate person with a right to life. We have yet to come
up with a good solution to these dilemmas, which may be another reason
to phase them in cautiously. 

J.





Robin Hanson Fri Mar 31 21:39:45 BST 2006

At 02:15 PM 3/31/2006, James Hughes wrote:
> >> I just think you ... do not see redistribution and regulation as
> >> desirable or inevitable
> > You keep making these false statements about me, which I deny.
>
>I'm sorry you think I'm misrepresenting you....
>You are associated, for instance, with "ideas markets" and market-based
>approaches to aggregating social preferences as a way to replace
>democratic mechanisms.... But they do indicate a ... shift from reliance on
>democratic deliberation to market mechanisms.   Isn't that the case?
>Isn't it fair to characterize you as a libertarian economist?

No, it is not fair to characterize me as a libertarian 
economist.  Some of my colleagues perhaps, but not me.   You have 
so far been complaining that, since I did not talk much about 
regulation in my uploads paper, I must be hostile to the idea 
and unaware of the regulatory issues you hold dear.   I have been 
trying to explain that I am aware of such issues and remain open to 
regulation, but that a low-regulation analysis is usually the best 
first step in an economic analysis.   I had thought a bit about 
upload regulation, but it is a messy situation and I felt uncertain, 
and so I chose not to say anything in that twelve-year-old paper.

The subject of "idea futures" as applied to government policy is 
about *how* we should choose regulation.   It is not itself pro or 
anti regulation.  Yes, I've advocated trying out markets to choose 
regulation, but that doesn't make me against democratic 
deliberation.   For example, I am a fan of James Fishkin's 
experiments in deliberative democracy mechanisms.

>>I gave a long analysis showing how there were at least five 
>>different ways to conceive of who are the "poor" in such a 
>>scenario, and I have twice now asked you to clarify which of these 
>>groups you want to favor with redistribution.   You complain that I 
>>have not supported  "redistribution" but without clarification this 
>>can only be a generic slogan.
>
>Your examples are interesting, and worthy of additional discussion, 
>but I really don't have to parse them before I can advocate a 
>general principle that I want to live in a roughly equal society.

Well, that is a key difference in our styles.  "Equal society" is too 
vague a slogan for me to endorse.  ("Equal in what?" my internal 
critic screams.)   I would rather not take a public position if I 
cannot find something clearer to endorse.   But please do not mistake 
my lack of many positions on upload regulation in my first uploads 
paper for my not caring about or being aware of regulatory issues.

FYI, regarding the questions I posed, my current leanings are: that 
creatures who might exist should count in our moral calculus; that 
upload copies will diverge quickly enough that they should mostly be 
treated separately, rather than as clans; that the ability of humans 
to earn substantial wages should not matter much beyond its 
contribution to their income; and that while the fact that human 
subsistence levels are higher should be a consideration, that 
consideration is greatly weakened when humans reject the option to 
convert into cheaper-to-assist uploads.   Your intuitions may differ, 
but I don't think anyone should feel very confident about such opinions.



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 




Hughes, James J. Fri Mar 31 23:12:48 BST 2006

> it is not fair to characterize me as a libertarian 
> economist. 

Excellent. Delighted to hear it. 

> I am a fan of James Fishkin's 
> experiments in deliberative democracy mechanisms.

Excellent. Me too. I think they complement the idea markets mechanism
nicely in our promotion of participatory models of future governance.

> my current leanings are 
> that creatures who might exist should count in our moral 
> calculus, 

Hmm. A long-standing debate in utilitarian theory as you know. Clearly
we want to make policy that will ensure the greatest possible happiness
for all the beings that exist in the future, even though we are not
obliged to bring them into existence. It seems like your model in Dawn,
if we interpret it as normative rather than descriptive, would fit with
"the repugnant conclusion" of utilitarianism that we should create as
many beings as possible, even if each of them might have less happy
lives, because we will thereby create a greater sum of happiness than by
creating fewer, happier beings. Is that what you mean?

> that upload copies will diverge quickly enough that 
> they should mostly be treated separately, instead of as 
> clans, 

I would agree, but it depends on how much they are extensions of the
primary subjective "parent." One can imagine one consciousness shared
across many bodies or upload clones, tightly networked, where separate
self-identity never arises. The Borgian possibility.

> that the ability of humans to earn substantial wages 
> should not matter much beyond its contribution to their 
> income, 

Not sure what you mean there.

> and that while the fact that the human subsistence 
> levels are higher should be a consideration, that 
> consideration is greatly weakened when humans reject the option to 
> convert into cheaper-to-assist uploads.  

I make the same argument about human enhancement and disability. I'm
happy to have the Americans with Disabilities Act urge accommodation of
the disabled in the workplace. But to the extent that disability becomes
chosen in the future (refusal of spinal repair, sight replacement,
cochlear implants and so on) it weakens the moral case for accommodation.


In that sense, if neo-Amish humans refuse to become faster, more able
uploads, their case for accommodation of their decision is weak. But
framing all humans who decide to remain organic as undeserving
self-cripplers in a brave new uploaded world is part of the political
challenge your essay points us to. We need to come up with a more
attractive frame for the co-accommodation of organic and upload life.

J.





Samantha Atkins Fri Mar 31 23:38:40 BST 2006

In a world of true abundance there is no necessity for the strong to
crush the weak.  I do not see it as moral to rein in the more capable
for the supposed sake of the less capable.  That simply lowers overall
capability of humankind.   If the minimal level of prosperity is high
enough and the ability to enhance one's capabilities is open then I
see no particular worth in bemoaning that some are more capable and
have more than others.  It seems to me that such arguments only have
meaning under an assumption of non-abundance and a more or less zero-sum
situation.

- samantha


On 3/31/06, Dawn Amato <blaseparrot at yahoo.com> wrote:
> Hi Marc,
>   Have you ever read about Paul Erdos, notorious math super-star?  I always
> considered him the "Elvis" of math, but you are right, he was not really
> good at hanging out in reality.  He considered anybody who did not do math
> to be "dead".  His friends would think "oh, no. I didn't know so-and-so was
> dead" and then find out he was alive but had just stopped doing math.  If
> somebody really died, Paul Erdos would say they had "left the building".
> And he started that saying; it was picked up and used by others later.
>
>    And I agree with your post. Libertarians refuse to recognize the
> obvious, that is that the strong will crush the weak unless reined in. In a
> perfect democracy, laws are supposed to be on the books to protect the
> weak, not the strong. No, we don't live in a perfect democracy, but it
> gives us something to strive for.
>
>   D. Amato
>
>   Paul Erdös (1913-1996): "My mother said, `Even you, Paul, can be in only
> one place at one time.'
> Maybe soon I will be relieved of this disadvantage.
> Maybe, once I've left, I'll be able to be in many places at the same time.
> Maybe then I'll be able to collaborate with Archimedes and Euclid."
>
>   http://theory.cs.uchicago.edu/erdos.html
>
>   http://www.oakland.edu/enp/
>
>   P.S. I am a math groupie, not a mathematician.  I like brainy guys.
>
>
> Marc Geddes <m_j_geddes at yahoo.com.au> wrote:
>
> --- "Hughes, James J."
> wrote:
>
> > If Singularitarianism wants to paint a truly
> > attractive future, and not
> > one that simply fans the flames of Luddism, then it
> > has to put equality
> > and social security in the foreground and not as a
> > dismissive
> > afterthought. To his credit Moravec, in Robot,
> > argues for a
> > universalization of Social Security as a response to
> > human structural
> > unemployment caused by robot proliferation. Marshall
> > Brain reached the
> > same conclusion, and several of the principals at
> > the IEET and I are
> > supporters of the concept of a Basic Income
> > Guarantee. But since this
> > would require state intervention I suspect you don't
> > favor such a
> > proposal, which is why you advocate(d) minarchist
> > solutions like
> > universal stock ownership in the Singularity.
> >
>
> Well, as you know Dr J, I started out a moderate
> Libertarian but was eventually converted to your
> view-point - turns out I was a closet left-liberal
> after all. It's now hard for me to believe that I
> could ever have given Libertarianism any credence.
> There's just *so* much empirical evidence against it.
> Also read your book, thought it was quite good.
> Dealt very well with the political side of things.
>
> Libertarianism deals in idealizations which bear
> little relation to human nature as is (communism had
> the same problem). Mostly it takes a hugely
> over-optimistic view of human nature - Libertarians
> like to believe that they're super-competent and
> super-rational - that they're winners. But then I
> realized - hey wait a minute - what if I'm not one of
> the winners? What if I'm not a super-genius?
>
> Like me...thinking I could take on Eliezer and code up
> Singularity in a couple of weeks...
>
> I kept reaching for the math skills to blast 'em (the
> SL4 crowd), but where my math skills should have been
> there was just a hole :-(
>
> I kept reaching and reaching for the math to blast all
> the world's baddies but the post-human skills are just
> not there :-( It's a horrible horrible feeling.
>
> We all want to be X-Men - it's a really cool fantasy -
> unfortunately for most of us - the reality is pretty
> dire - we're weak and stupid :-(
>
> After a few years of intense effort I've managed to
> achieve a sort of bizarre 'partial awakening' as
> regards mathematics... a sort of pseudo-post-human
> awareness as it were...but only after intense effort
> and only at great psychological cost. That's why you
> see me 'freaking out' in some of my earlier posts...
> it's possible to SEE mathematical entities.... I mean
> REALLY see them... something analogous to the other
> senses (like looking at a chair for instance)... a
> modality. Again, I don't think this is recommended
> for most people... it's a *freaky* thing... all I can
> say when I see math is f**k! Not recommended for
> humans at all mate, no way :-(
>
> As to the Singularity: I just wish we could hurry past
> all the boring mathematics and get to the cool
> butt-kicking of all the world's baddies ;)
>
> Knock out the corporations, knock out the dictators,
> set up global democracy (but only on issues affecting
> every-one, like the economy). Let's have strong
> global democracy for the issues affecting every-one,
> but strong individual rights for individual issues - a
> person's right to control their own mind and body
> should NOT be subject to democratic vote. That's the
> system I expect the FAI to set up at Singularity.
>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> wta-talk mailing list
> wta-talk at transhumanism.org
> http://www.transhumanism.org/mailman/listinfo/wta-talk
>
>
>
>




Robin Hanson Fri Mar 31 23:51:59 BST 2006

At 05:12 PM 3/31/2006, James Hughes wrote:
>>my current leanings are that creatures who might exist should count 
>>in our moral calculus,
>
>Hmm. A long-standing debate in utilitarian theory as you know.

Yup.

>Clearly we want to make policy that will ensure the greatest 
>possible happiness for all the beings that exist in the future, even 
>though we are not obliged to bring them into existence.

I know many disagree on this point, but it seems to me that bringing 
creatures into existence with lives worth living should count as a 
morally good thing, just as I appreciate others having created me and I 
think they did a good thing worth praise.   If so, the prevention of 
vast numbers of uploads must weigh against policies to greatly 
increase per-upload wages.  But this need not be decisive, of course.

>It seems like your model in Dawn, if we interpret it as normative 
>rather than descriptive, would fit with
>"the repugnant conclusion" of utilitarianism that we should create 
>as many beings as possible, even if each of them might have less 
>happy lives, because we will thereby create a greater sum of happiness than by
>creating fewer, happier beings. Is that what you mean?

The "repugnant conclusion" has never seemed repugnant to me, which is 
another way I guess I disagree with others in population 
ethics.   But yes this upload scenario offers a concrete application 
of such issues.

>In that sense, if neo-Amish humans refuse to become faster, more 
>able uploads their case for accomodation of their decision is weak. 
>But framing all humans who decide to remain organic as undeserving, 
>self-cripplers in a brave new uploaded world is part of the 
>political challenge your essay points us to. We need to come up with 
>a more attractive frame for the co-accomodation of organic and upload life.

I don't know if a better frame can be found, but I'd be happy to hear of one.







Dawn Amato Sat Apr 1 02:16:52 BST 2006

When I say "rein in" Samantha, I am referring to the fact (as pretty much
proved by ALL of history) that human beings more often than not are apt to
use each other for personal gain or profit, with one of the participants
coming out as the loser. To deny this fact is to live in some la-la nirvana
cloud dream.  Will human nature change substantially just because some
people can now think faster and are smarter?  What guarantees do you offer
that human nature will evolve along with human math abilities? Are very
smart people kinder than less smart people? I don't see any evidence of
this; in fact, the politics and in-fighting at Universities where all our
supposedly smartest people congregate can be cutthroat and downright lethal.
   
  D. Amato

Samantha Atkins <sjatkins at gmail.com> wrote:
  In a world of true abundance there is no necessity for the strong to
crush the weak. I do not see it as moral to rein in the more capable
for the supposed sake of the less capable. That simply lowers overall
capability of humankind. If the minimal level of prosperity is high
enough and the ability to enhance one's capabilities is open then I
see no particular worth in bemoaning that some are more capable and
have more than others. It seems to me that such arguments only have
meaning under an assumption of non-abundance and a more or less zero-sum
situation.

- samantha





		



Eugen Leitl Sat Apr 1 09:15:13 BST 2006

On Fri, Mar 31, 2006 at 02:43:22PM -0300, Marcelo Rinesi wrote:

> I *want* uploads (and AI, and the whole enchilada). That's why I want a
> good, hard look at them before we plug them on. The possible side

You want a good, hard look. Somebody else won't have that luxury.
"They" will just "plug them on". Wheee.

> effects of them -as with any technology- *cannot* be an excuse not to go
> ahead.

AI and whole body/brain emulation is the mother of all disruptive technologies.
You may want to regulate them -- but you won't be able to, if they don't
want to be regulated.

And don't make another mistake -- if there's a war, we will lose.

We don't have that much choice, unfortunately.

-- 
Eugen* Leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Marc Geddes Sat Apr 1 11:18:27 BST 2006

On 4/1/06, Hughes, James J. <james.hughes at trincoll.edu> wrote:
>
> > I agree with you that this is a potential problem, but rather
> > than rely on a monolithic government to legislate our way out
> > of them (which has well known problems of its own), I will
> > suggest that this is exactly the sort of thing my Domain
> > Protection idea is designed for:
> >
> > http://homepage.eircom.net/~russell12/dp.html
>
> As I understand your proposal Russell, it is that we would ask the
> world-controlling Friendly AI to set up regions that are not allowed to
> interfere with one another, one for uploads and one for ur-humans.



Russell's proposal is just Libertarianism enacted as FAI.  *sigh*

Really.  This political blindness of these high IQ types is a good
illustration for you, Dr J.  High IQ is no guarantee of rationality at all.
And look at the lengths these guys go to to rationalize their positions.
Some of these guys were tougher than others, but I saw through them all in
the end.  Even Eli got nailed by my intuition eventually.  Poor fellas.



Robin Hanson Sat Apr 1 13:00:35 BST 2006

At 03:15 AM 4/1/2006, Eugen* Leitl wrote:
>AI and whole body/brain emulation is the mother of all disruptive 
>technologies.
>You may want to regulate them -- but you won't be able to, if they don't
>want to be regulated.

That is of course another good reason for first analyzing low-regulation
scenarios.  As one tries to use regulation to move further and further away
from those scenarios, the stronger become the incentives to get around the
regulation, and so the more Draconian the monitoring and enforcement process
must become.  For that reason I find it hard to imagine successfully raising
upload wages to more than ten times what unregulated wages would be.
Unregulated wages could be, say, $1/yr, putting the upper limit of regulated
wages at, say, $10/yr.  So there seems no way to escape upload wages being
very low by today's standards.







Hughes, James J. Sat Apr 1 14:52:45 BST 2006

> As one tries to use regulation to move further and further 
> away from those scenarios, the stronger become the incentives 
> to get around the regulation, and so the more Draconian the 
> monitoring and enforcement process must become.

Right now, around the world, there are many countries that have
slavery/involuntary servitude, and within the North there are many
employers who evade minimum wage laws by paying in cash, or who have
unsafe work conditions, or who coerce workers to do illegal things. Lots
of people evade paying taxes, and lots of people commit crimes. But the
solution is not to simply give up on the notion of law and the
regulation of the labor market. It's to strengthen the regulatory
capacity and efficacy of the state.

The limits on making the state stronger in a democracy are the
willingness to pay for the costs of the regulation, and the tolerance
for the impositions on liberty and privacy. This is where I think we
should be creatively imagining - and I'm sure many already are - ways
that cybernetic and information tools, and eventually AIs, can
detect crime without imposing high regulatory costs. The balance between
law enforcement and liberty will still be a problem, however. 

For instance, like most Connecticut residents, I exceed the speed limit
every day driving back and forth to work. But I've only gotten two
speeding tickets in the last ten years.  To actually enforce speed laws
effectively with cops would take an order of magnitude more traffic
cops hidden behind berms on the side of the road. No one can afford
that. If we had a smart highway and smart cars, or even if each car had
a GPS tracker, we could easily detect speeding and automatically impose
fines; some states have experimented with auto-speed tracking lasers
that capture license plates and mail out fines.

So, if truly effective traffic law enforcement were cheap, the question
before the public would be whether they really wanted those laws
enforcing those speeds. I suspect that if we really enforced traffic laws
we would raise the speed limit to the usual 80 mph on the CT highway. Or
we would keep it the same, the state coffers would fill with fines, and
there would be fewer highway deaths. Either way, it's a democratic
choice.

This is the situation we face now with all the potentially apocalyptic
threats - e.g. are we willing to create the regulatory and police
apparatuses to ensure that we don't end up cracked in a future dawn by
runaway AI(s) and uploads? If the kinds of surveillance and prevention
it will take to prevent apocalyptic risks are "Draconian" then hopefully
we can have a public debate about what the trade-offs are between
security and risk. At least the cost of surveillance and enforcement
should come down, however, making the consideration of effective
surveillance and enforcement fiscally acceptable. 

Of course I say that after the US has just bankrupted itself and
weakened domestic liberty on the pretext of suppressing terrorism and
chasing chimerical weapons of mass destruction, while actually
generating terrorism and seeing nuclear proliferation continue
unchecked. So I grant the capacity of democracies to destroy liberties
and spend inordinate sums on law enforcement unwisely. Maybe a Friendly
AI-on-a-leash would help us make better decisions.

J.




Robin Hanson Sat Apr 1 17:43:39 BST 2006

At 08:52 AM 4/1/2006, James Hughes wrote:
> > As one tries to use regulation to move further and further
> > away from those scenarios, the stronger become the incentives
> > to get around the regulation, and so the more Draconian the
> > monitoring and enforcement process must become.
>
>... This is the situation we face now with all the potentially apocalyptic
>threats - e.g. are we willing to create the regulatory and police
>apparatuses to ensure that we don't end up cracked in a future dawn by
>runaway AI(s) and uploads. If the kinds of surveillance and prevention
>it will take prevent apocalyptic risks are "Draconian" then hopefully we
>can have a public debate about what the trade-offs are between security
>and risk. At least the cost of surveillance and enforcement should come
>down however, making the consideration of effective surveillance and
>enforcement fiscally acceptable.

Imagine that the hardware cost of supporting another upload is $1/yr, but
that regulation has increased the legal wage to $100/yr.  Upload John
Smith is thinking of starting a new business whose main expense is
10,000 employees.   The costs of this business are then $1,000,000/yr
if done by the book.  John could instead create 10,000 copies of himself
to run the business, in which case his costs would be $10,000, plus
whatever it takes to hide the computers running his uploads.  This would
clearly be extremely tempting to John.

Presumably John's copies of himself are not going to complain about the
arrangement.   So to prevent this one might need to inspect every computer
capable of running an upload at anything close to the efficiency of
computers designed to run uploads, to make sure they aren't running hidden
uploads.  Alternatively one might need accurate ways to estimate the number
of people that must be needed to produce any given product or service.
And one would have to prevent the existence of "free wage zones," so global
governance would be required.
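(The incentive gap in John's situation is simple arithmetic; a minimal sketch, using only the hypothetical figures from the scenario above:)

```python
# All figures are the hypothetical ones from the scenario: $1/yr hardware
# cost per upload, a $100/yr regulated wage, and a 10,000-employee business.
HARDWARE_COST_PER_UPLOAD = 1   # $/yr to run one extra upload copy
REGULATED_WAGE = 100           # $/yr legally mandated upload wage
EMPLOYEES = 10_000

legal_cost = EMPLOYEES * REGULATED_WAGE              # hire 10,000 at the legal wage
evasion_cost = EMPLOYEES * HARDWARE_COST_PER_UPLOAD  # run 10,000 hidden copies

assert legal_cost == 1_000_000          # $1,000,000/yr done by the book
assert evasion_cost == 10_000           # $10,000/yr evading (plus hiding costs)
assert legal_cost / evasion_cost == 100  # a 100-fold incentive to evade
```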








Hughes, James J. Sat Apr 1 17:53:45 BST 2006

> one
> would have to prevent the existence of "free wage zones," so 
> global governance would be required.

Here we agree.

J. 




Dawn Amato Sat Apr 1 17:58:39 BST 2006

I agree with Eugen.  The singularity is not some nice gentle dip around the
merry-go-round... it's Mr. Toad's Wild Ride.  And we will pick up what's
left of humanity, and how we define the concept of humanity, on the other
side, dust it off and go "wow".  We will be changed.
   
  A war between whom?  And who will win?  You lost me here; I did read
Marcelo's post but wasn't sure how you were referencing it.
   
  Have a great day, Eugen.
   
  D. Amato

Eugen Leitl <eugen at leitl.org> wrote:
  On Fri, Mar 31, 2006 at 02:43:22PM -0300, Marcelo Rinesi wrote:

> I *want* uploads (and AI, and the whole enchilada). That's why I want a
> good, hard look at them before we plug them on. The possible side

You want a good, hard look. Somebody else won't have that luxury.
"They" will just "plug them on". Wheee.

> effects of them -as with any technology- *cannot* be an excuse not to go
> ahead.

AI and whole body/brain emulation is the mother of all disruptive technologies.
You may want to regulate them -- but you won't be able to, if they don't
want to be regulated.

And don't make another mistake -- if there's a war, we will lose.

We don't have that much choice, unfortunately.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE





Jef Allbright Sun Apr 2 18:29:07 BST 2006

On 3/31/06, Dawn Amato <blaseparrot at yahoo.com> wrote:
>
> When I say "rein in" Samantha, I am referring to the fact (as pretty much
> proved by ALL of history) that human beings more often than not are apt to
> use each other for personal gain or profit, with one of the participants
> coming out as the loser. To deny this fact is to live in some la-la nirvana
> cloud dream.  Will human nature change substantially just because some
> people can now think faster and are smarter?  What guarantees do you offer
> that human nature will evolve along with human math abilities? Are very
> smart people kinder than less smart people? I don't see any evidence of
> this; in fact, the politics and in-fighting at universities where all our
> supposedly smartest people congregate can be cutthroat and downright
> lethal.
>
>
"Are very smart people kinder than less smart people?"  As you've pointed
out, there's plenty of evidence that being smart does not necessarily mean
being nice.  But this observation can be misleading for at least two
reasons:  (1) one can be smart but lacking broad awareness, and (2) some
actions are highly moral but not considered kind.

We too often think of "smart" in terms of sharp and narrow "brilliance"
rather than the broad wisdom that involves awareness of consequences over an
expanding scope of time, interactions and interactees.

We too often think of "kindness" in terms of the "niceness" of the immediate
interaction, and often fail to appreciate the deeper morality of a parent
disciplining a child (who doesn't appreciate it at all), a leader taking
unpopular action for a greater goal,  harsh action executed in self-defense,
and so on.  What all such examples have in common is broader awareness.

So, are very smart people kinder than less smart people?  No, often they are
not.

Are very aware people more moral than less aware people?  Yes, increasing
with the extent of their awareness of their shared values and awareness of
principles of action that work.

We could really use an effective scalable framework for growth of such
awareness.

- Jef
http://www.jefallbright.net
Increasing awareness for increasing morality


© 2005 Journal of Evolution and Technology.  All Rights Reserved.

Published by the Institute for Ethics and Emerging Technologies

Mailing Address: James Hughes Ph.D., Williams 229B, Trinity College, 300 Summit St., Hartford CT 06106 USA
