The Net As New Jerusalem, Part Two

If Victor Frankenstein were running around 21st century America, he wouldn't have to hide in a tower with the monster. He'd be in Silicon Valley, lunching with venture capitalists or counting his bio-tech shares and planning the move to shiny new offices. If the Net is, in fact, the New Jerusalem, it needs a different kind of politics, especially one that begins with an ethical and moral purpose. Some ideas of mine follow; please add yours.

(Second in a series)

"I have worked hard for nearly two years, for the sole purpose of infusing life into the human body," wrote Victor Frankenstein in Mary Shelley's great novel. "For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart."

In our time, Victor Frankenstein would be in Silicon Valley, taking one meeting after another with venture capitalists, angling for a profile in Wired, wrangling tens of millions for his new company, lifeinthebody.com (based in Cambridge, Mass.), beginning commercial licensing of the discoveries of the Human Genome Project.

In contemporary America, Victor wouldn't have to hole up in a remote tower far from human observation. He could partner up with somebody out in the open and promise to create perfectly engineered babies, cure cancer, and stop aging. The venture capitalists would be drooling all over him.

Ethical morasses lie at the heart of modern-day technology, increasingly run by highly educated, wealthy elites who have little awareness that not everybody is as technologically inclined, equipped or advanced as they are. In this Jerusalem, half the country is still outside the gates.

But perhaps these ethical quandaries could form the foundation for a new kind of ethical and rational politics that addresses social divisions between the techno haves and have-nots, the future of gene mapping, intellectual property questions, the use of nano-technologies, the creation of ubiquitous and expensive technologies that are poorly designed or environmentally damaging, and the intrusion of government. As in Victor Frankenstein's time, we hear little public or civic discussion about these choices. They haven't surfaced during the presidential campaign and debates. They get crowded off front pages and TV newscasts by hype-laden stories about dotcom greed, crackers and sexual predators online.

We need an ethical framework for technology, and while I'm not a technologist, I'm happy to start the discussion by suggesting some opening questions to ask about technological developments.

Do we need it?

Can we support it? Can the people who buy or use what we make get free, readily-available help?

Are new technologies open to peer review and scrutiny, that is, are the software, hardware, systems and design of new technologies available for public and other inspection in order to root out potential mistakes, problems and flaws?

Will everyone have equal access to new technologies, or will they become the property of corporate and social elites with specialized knowledge and lots of disposable income?

Do new technologies have unintended consequences? Have academic, business or civic analysts examined them? Have their ramifications been explained to the people affected (as in telling Victor Frankenstein's neighbors that a monster would soon be running around the community)?

Can technologies be created with consideration both for the environment and for consumers' convenience? Can batteries, parts, cartridges, support and service be standardized, so that consumers don't have to continuously scramble? Can software and computer makers agree on ethical standards for their products' lifespans, so that people who invest in expensive technologies can be assured that they will last a few years, and that products and software for them will be available in the future?

Can the sale and licensing of gene research to private bio-tech corporations be halted until critical social issues can be discussed and resolved? The public has yet to grasp the consequences of such research falling into the hands of a few corporations, lulled as they are by scientific and political promises of cures for cancer, aging and heart disease.

Is downloading music or a novel theft? Do the ethics of copyright and intellectual property need reconsideration? Or elimination? Is there a more rational alternative to the Sonny Bono and Digital Millennium Copyright Acts?

How can we ensure that technology and software companies and Web sites prominently disclose privacy provisions and implications? It ought to be illegal to distribute people's personal information without their knowledge and permission.

Perhaps we should require that before new technologies are licensed, deployed or sold, we need a technological impact statement. Like the environmental statements designed to make people aware that their surroundings could be affected by construction or research projects, a TIS would mean that before projects like the gene map are sold and distributed, ordinary people are aware of the technology and its possible impact on their lives and those of their children.

Comments Filter:
  • by Anonymous Coward

    Anything which we, as a decent Christian nation, can implement to ensure that the standards of morality are upheld in our children, will be 100% wholeheartedly supported by me and many other concerned parents. We need to take a step back from the current crop of violent, exploitative technological "advances" and remember that our worth as human beings comes not in the amount of gadgets we accrue, but in how closely we follow the teachings of our lord Jesus Christ.


  • I always thought that the thing that the internet really needs to realize its potential is a declaration by the world's leaders (especially the US) that it is a kind of separate pseudo-reality that does not recognize political borders. The internet could be treated like international waters.

  • by thenerd ( 3254 ) on Friday November 10, 2000 @07:14AM (#631416) Homepage
    Having just read the article, I'm gripped in a post-katz feeling of hysteria and paranoia. Should I be?

    No, I don't think I should. Silicon Valley isn't the new anything. A lot of people have tech jobs there. A lot of people have other sorts of jobs. Yes, technology has got to the point where we can pirate. Just because we can doesn't mean the courts are forced not to prosecute us, as we are still responsible for our actions. Yes, gene research and biotechnology will have ramifications. Six months after an outcry, there'll be legislation (or at least there'll be lawyers).

    While things change, it all stays the same really. Yes, technology impacts on things, but yes, the courts will catch up. We're not above the law. There's nothing stopping me going to kill someone, much like there's nothing stopping me pirating software, or invading somebody's privacy. The difference is that the crime of murder has been around a lot longer.

    It's too easy to think the world will suddenly melt down, if you read too much stuff from the net. (Like this article). Whether we need it, or want it, we'll get legislation. Nothing slides too much without being nailed down. We pay lawyers too much.

    thenerd.
  • This wasn't ever necessary before, was it?

    Ok, maybe someone shouted Fire! a long time ago, but we are part-way through an information revolution.

    Besides, who really knows the true status quo at any given time?

  • The modern world is changing. We've seen plenty of /. articles about it. Does Mr. Katz really think that he must repeat the obvious? Everything in his article I've seen before. Katz should go visit Slashdot [slashdot.org] a news for nerds site that seems to beat him to the punch just about every single time. Maybe he does go there, and just steals all his material... Either way, it was a dumb article.
  • Don't look for it to happen any time soon. Many of the world's leaders can probably barely understand e-mail, let alone the Internet. Clinton may be the most Internet-savvy President to date, if for no other reason than looking for hot Asian babes, but he's still scraping the bottom of the barrel. (In more ways than one.)

    To have the Internet treated like international waters means that it would exist outside all countries' laws and therefore be immune to certain scare tactics from lawyers and such. The problem lies in the fact that the servers that house web-based content are not physically located more than 12 miles off the shore of every country. They are within the bounds, and therefore the laws, of a country. I cannot see the U.S., or any other country for that matter, ceding most legal rights to any dink with a server. Furthermore, I do not want to see extra-legal powers for AOL, as giving them the same status as international waters would do. (Instead of dealing with U.S. law, they would get to go to the U.N., that bastion of common sense and understanding.)

    Just my 2 shekels.

    Kierthos
  • by Dragoness Eclectic ( 244826 ) on Friday November 10, 2000 @07:20AM (#631420)
    Does this article actually define a problem and propose a solution, or does it merely pose a series of intriguingly vague questions?

    Answer: no, there is no problem actually defined here, though the tenor of the questions implies that the reader is supposed to believe there is one--though not exactly what it is. The imagery and emotional hot-buttons pushed through Katz's choice of phrases have a vague neo-Luddite, Naderite ring to them, which floats away in the swamp of unanswered questions.

    The answer to the questions is: Yes, and Maybe.

    At the bottom we have a nice "perhaps":

    "Perhaps we should require that before new technologies are licensed, deployed or sold, we need a technological impact statement. Like the environmental statements designed to make people aware that their surroundings could be affected by construction or research projects, a TIS would mean that before projects like the gene map are sold and distributed, ordinary people are aware of the technology and its possible impact on their lives and those of their children."

    Translated:
    "Perhaps we should have more government paperwork required before any innovative business is allowed to do anything, so that more bureaucrats can make a living, getting a power trip from saying "no", holding their hands out for bribes--er, campaign contributions--, and so that established businesses have yet another legal roadblock they can use to squash competition, and so that any fringe group that doesn't approve of your politics can use the process to shut you down regardless of the actual merits of your product or business."

    Reality check: the problem with your "perhaps" is the same old one: WHO DECIDES? Who decides whether my product or business is permissible? Do you really want to hand over to a government body or political group or ANYONE AT ALL the power to FORBID you to research or invent something new?

    (Dragoness hands Jon Katz a copy of "Atlas Shrugged", and crawls back into her lair.)
  • How is it different and why? Are telephone calls "a kind of pseudo-reality?" They don't recognize political borders very well either, but they're regulated in your country if you're lucky.
  • by plastickiwi ( 170800 ) on Friday November 10, 2000 @07:22AM (#631422)
    Lest we forget the import of the subtitle of Shelley's novel, we should consider that its central theme is consequence. Victor gets slapped down for playing in God's domain, but his final words are "I have been blasted in these hopes, but another may yet succeed." His death is the consequence of exploring the unknown (it's relevant that he dies on a ship exploring the polar wastes), but the genie of his work is out of the bottle. Captain Walton leaves the pole with Victor's notes in hand, pondering the implications of what he now knows to be possible.

    Humanity's most precious possession is our most terrible curse: memory. We're adept at freeing genies from bottles, but inept at putting them back...or leaving them there even if we do manage to reimprison them.

    It's easy to say we shouldn't pursue a technology that's not sustainable, not clean, not fair in the consequences it will bring into the world...but who are "we"? Certainly Jon Katz and I can agree not to dabble in things that will harm our neighbors or make the world less hospitable for our descendants, but will everyone else? When "we" say "The consequences of Technology X are not acceptable," how do we prevent "not-we" from having, and acting upon, a different opinion?

    Lots of third world nations find it awfully suspicious that the major industrial powers are trying to limit CO2 emissions just when industrialization is starting to benefit the little guy. Sure, we know things now about the effects of CO2 on the environment that we didn't before, but how much comfort is that knowledge to the starving peasant who could have benefited from manufactured goods, but whose government has been bullied into signing an agreement not to use technologies damaging to the environment? How do "we" weigh a .001% greater chance of skin cancer for everyone in the world against the quality of life of a few million? How do we make amends for decisions of this nature that we've already made, and that we continue to make to this day?

  • This is just another way to bitch about the gap between rich and poor, and not a particularly novel one at that.

    The real money is in solutions to these problems, which are in dreadfully short supply, unlike rants such as this one. Give me a break.

  • As this nanotech future becomes closer I think our vision of it will become clearer. The disparities that exist among nations are not caused by natural forces which, once conquered by technology will lead to parity for all. Disparities in food supply, raw materials for industry, and resources for technology development exist because access to and use of these resources is governed by socio-political factors (profit potential, internal and external politics, nationalism, etc.). These resources are not governed by distribution models based on need, conscience, or rationality. Amartya Sen, who won the Nobel Prize in economics in 1998, has demonstrated that events such as famines are not caused by a lack of food production, but are caused by interruptions to or restrictions on the distribution network. Knowledge (the most important component of technological development) is no different.

    When these technologies are developed and deployed they will meet the same fate as all technological innovation (unless we act to make a difference): Their distribution will be based on profit potential, politics, nationalism, etc. These technologies will not be distributed to the developing world, unless the developing world can suddenly afford to pay for them. And don't think AI will be able to solve any of these problems either. Since the root causes of our human misery (famine, poverty, disease, ignorance) are caused by us, an advanced AI will have as much trouble dealing with them as we do.

    We know how to treat water so it does not transmit disease. We produce more than enough food to feed the people in the world. Medicines are available to prevent or treat the diseases that ravage many third-world countries. These problems (disease, hunger, ignorance) all have solutions that have been proven to work; so why do these situations continue to exist? They exist because we allow them to exist, they exist because our system of distributing these technologies has not kept pace with the technologies themselves.

    In order for technology's greatest benefits to make an impact on the overall human condition we have to change the way our civilization operates. Unless we change our thinking, all the technology in the world will not save us.

    We need to modify the distribution system for this new technology to really benefit humanity as a whole, and not just enrich the few already wealthy and powerful. How can we hope to get the benefits of 21st century ideas and innovation with 18th century distribution models?

  • by Shivetya ( 243324 ) on Friday November 10, 2000 @07:25AM (#631425) Homepage Journal
    Too many of your "Can we....." are simply the result of technological advances and the desire for their continued occurrence. We live in a world that is rapidly leaving a lot of people behind.

    However, you fail to realize that there are indeed a great many people just as happy to be left behind as those who must be on the cutting edge.

    Can we protect, standardize, legitimize, or organize everything, or mostly everything?

    NO.

    Should we?

    NO.

    Fact is, most of the people in this group who want to insulate, protect, coddle, smother, suffocate, and control are for the most part clotting egomaniacs who don't think anyone can think for themselves. Your own text tells me you are part of this elitist crowd which believes it must think for the poor underprivileged people.

    There are cases where help is genuinely needed, but a lot of the world would cease having a reason to progress, let alone live, if someone was constantly speaking in their head "don't step on the grass", "that's not your money", "tell the policeman everything", "those thoughts are evil"

    To hell with your world, Jon. I prefer to have a chance to fuck up my own life as well as a chance to make it work.
  • by Infonaut ( 96956 ) <infonaut@gmail.com> on Friday November 10, 2000 @07:26AM (#631426) Homepage Journal
    Uh, Jon, I hate to tell you this, but the whole "progress" thing has been going on for a while, and it ain't predictable.

    In the early stages of the automobile, there were hundreds of manufacturers in the US, and lots of unsafe cars. Now there are the Big Three and cars are much safer, but do you think that during the early stages of the industry anyone could possibly have predicted what the automobile would become? In the early stages of any new technology, it's really rather impossible to predict future uses or outcomes.

    Dynamite was supposed to render wars a thing of the past, due to its vast destructive power. I'll bet if you polled leading "experts" and concerned citizens at the time of its creation, most would have agreed with that prognosis.

    My point is that it would be wonderful if we could truly understand the impact of new technologies before their introduction into society, but there are so many variables (human behavior, economic trends, interaction with other technologies, invaders from Mars...) that it's just not feasible to come to any real conclusions about the impact of a technology, other than the really obvious, immediate effects.

    You seem to be saying that we should innovate within the framework of a vast plan, which is contrary to how innovation works best.

  • I know this may sound childish... But: What if we (as in "internet users") declared it independent? Yes, individual servers are physically located in a physical country. However, information doesn't have to be. A very good example is if you XOR a message with random data, and store the random data, and the result of the XOR, on two different hosts on the net. Both of them contain as much information as the other, and independent of each other, they both are useless random data... We have the technical tools to defend such a country (e.g. freenet). We seem to have the people (the free software community is quite a good start, but there are quite a lot of other communities that I think (c|w)ould gather up)... What I mean is a declaration of independence, including a basic set of rights and obligations of the citizens of the net.
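    The XOR scheme this comment describes is essentially a two-share one-time pad: hosted separately, each share is statistically indistinguishable from random noise, yet XORing the two together reconstructs the message exactly. A minimal Python sketch (the function names here are illustrative, not from any particular library):

    ```python
    import secrets

    def split(message: bytes) -> tuple[bytes, bytes]:
        # One share is a fresh random pad the length of the message;
        # the other is the message XORed byte-by-byte with that pad.
        pad = secrets.token_bytes(len(message))
        cipher = bytes(m ^ p for m, p in zip(message, pad))
        return pad, cipher

    def combine(share_a: bytes, share_b: bytes) -> bytes:
        # XORing the two shares cancels the pad and recovers the message.
        return bytes(a ^ b for a, b in zip(share_a, share_b))

    msg = b"a declaration of independence"
    a, b = split(msg)
    assert combine(a, b) == msg
    ```

    Either host alone holds only uniform random bytes, so neither copy is "the" message in any meaningful sense; only the combination carries the information.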
  • IMHO, one of the most important contributions of the Free Software Movement is its ethics. People do things for reasons other than money. They share their work with others freely. There's the concept of The Right Thing (rarely agreed on, but it's there). It's OK to eat and even become pretty rich, but those things aren't the focus; there's even an aversion to those for whom it is the focus (suits). And it's voluntary.

    If you think about it, this is the reverse of the problem that Jon started his article with. Lots of r&d is being done not because it's the right thing, but because there may be a great market for it (some researchers, I know, do things for the cool factor w/o thinking of impacts. Ah, well). I think that if we can export the ethics of Free Software to people, this will do the job.

    The concept of "Zion" shows up in a few places other than the Matrix :) (and in most religions, not much like the concept in the Matrix). I'm really surprised sometimes by how some of the ethics of the Free Software movement match my conception of Zion in Mormon theology and thought. If you don't mind reading about the concept with a Mormon/Christian bent, you may want to look at the following:

    A Storyteller in Zion, Orson Scott Card
    Approaching Zion, Hugh Nibley (especially the chapter entitled "Work We Must But The Lunch Is Free")

    (And, yes, there are plenty of differences between the Mormon and Free Software communities. But that doesn't mean something can't be learned by looking around).
  • The problem that I see is that everyone seems to be taking the thing out of context.
    The internet is no bigger than a combination of the telephone ("poll"/"pull" technology) and television ("push" technology). (Which admittedly were big). It's what I call an "enabling technology".
    It's no more a technological panacea for all modern ills than the telephone was in its day.
    One lesson we can learn from TV as a "push" technology is that, as in the USA now, a very low signal-to-noise ratio medium evolves.

    However, for the subject of ethics, there are instant parallels that could be drawn, but seem to be overlooked. OK, "wrongs" can be had/done, but...
    A kid can view www.pr0n.jp, yeah, but he can phone 0047980084102398 for "big bosomed bimbos".

    People can download a copy of "American Pie II, The Cherry's Revenge" illegally and ftp it to mates. However, they can also set their dish to pick up the feed _to_ regional TV stations, and therefore get Babylon 5 intended for broadcast the next day not only early, but with all the adverts removed.

    I could go on.
    However, that would make me as boring as Jon Katz.
    Why do I reply to his threads, they just wind me up.

    FatPhil
  • ...everybody isn't as technologically-inclined, -equipped or advanced as they are. In this Jerusalem, half the country is still outside the gates.

    Would you please explain why this is such a bad thing? How do you justify your arguments for social engineering?

    Like it or not, Jon, the very idea of progress means that some ideas are left behind in favor of others. This also means that some people are left behind in favor of others.

    No elite group of society can answer the questions you raise in your article. It is folly to think otherwise. If people have the freedom to choose, the end result will be the best.

    Most of your suggestions are just another example of liberal/communist social engineering.

  • The Ethics you propose misses an important aspect of what is peculiar about the West as a civilization. Unlike other civilizations, the West has never met a power increasing technology it didn't like. That is why neither China nor Islam rule the world today. Both were at different times the frontrunners, but their leaders were scared of uncontrollable power and backed off.

    Frankenstein's inability to back off is thus a very apt metaphor.

    Can we change course now? I don't know and the odds seem against it.

    Is it worth it? I am not sure. I am sure that sooner or later the West will meet the tragic end that awaits all positive feedback loops. But the opposite of tragedy is not necessarily a happy ending; it may well be just a slow fade out.

  • Will everyone have equal access to new technologies, or will they become the property of corporate and social elites with specialized knowledge and lots of disposable income?

    Let's look at some other countries. I have a big dilemma about whether to get a Palm 3 regular or color, and whether I am going to WAP-enable my website. People in 3rd world countries have problems like people with AIDS thinking they can cure themselves by having sex with a virgin, lack of food, plague. It is already with the social elites.


    Perhaps we should require that before new technologies are licensed, deployed or sold, we need a technological impact statement.

    I think that this is a GREAT idea. Not only in the tech sector, but how about cloning, gene selection, abortion, etc.? If we had some law or code that established a set of ethics or rules before we unveiled a new technology, that would be a great way to ensure proper use of the technology. Is it feasible? No, I don't think so, because other countries would develop the technology of our discoveries, people are not that patient, and we live in a CAPITALIST society. If it makes money then it survives; this idea would have to be a governmentally employed system, and then the gov. would be restricting the release of technology - not a good thing, to say the least!

    I think that most of the questions that Katz presents could be solved/answered with open sourcing. Someone comes out with a new hard drive, release all the tech specs and let the public hack through it. The person could still retain copyright. This would make copyrights much more specific and not allow the huge vague copyrights that are plaguing us now.
  • Your real question is whether we can behave like enlightened humans and share with each other. Can we be free to create without the eventual downsides that follow....greed, callousness, and finally evil.

    I think the answer is yes we can.

    Even though I am a pessimist first and foremost. I still believe that we can over come our desire to be richer and more powerful than others, and get down to the basics of life, what our parents taught us : "share with others". My parents taught me this and I'm trying to teach it to my children.

    Not to say that it won't be a long and arduous journey, but we WILL get there. Hopefully before I die.


  • The problem with this view is that it assumes the objects themselves contain some kind of higher moral impact when they come into existence, which simply isn't true. The questions you're asking are naive, because what you posit is simply nonsensical. How do you assess the impact of a new technology? Perhaps we should let Science Fiction authors write these "technological impact reports". While it's true that there are implications to our new technologies, we already turned the corner on this years ago. We've reached the point where in order to get these new technologies (read concepts) under control we must embrace our own ability to dream up new ideas to deal with the dangers. We're entering a time where the cycle between envisioning a new idea and turning that dream into reality happens faster and faster. With the advent of the Internet our universe now changes at a rate of hundreds of times a day. And as Napster and other technologies have shown, often it outstrips our conceptual frameworks in the most surprising ways. Cyberpunk was all fun and games until Gibson's vision of a universe where technology as an organism in the hands of the people became real. As this election is showing us, we've entered a time of deep division between those on the street and those in the ivory towers and how they use and respond to technology. The revolution won't just be televised, it will be television itself...
  • Certainly Jon Katz and I can agree not to dabble in things that will harm our neighbors or make the world less hospitable for our descendents, but will everyone else?

    Other than a utilitarian philosophy asking us to restrict ourselves to protect our environment, what incentive do individuals have to *not* act in their own self-interest in such matters? None. Legislation at a national level is obviously not a solution, but an oppressive local gov't isn't any better, just less well-armed.

  • But since it seems like we are going to have someone lesser than a malignant carbon rod as president, can we expect taco to leave the country along with Baldwin and Cher now? And what about Katz? and Streisand?

    If you guys are leaving the country, don't let the door whack you on the butt on the way out.
  • by Andy_R ( 114137 ) on Friday November 10, 2000 @07:34AM (#631437) Homepage Journal
    Ok, let's look at a specific 20th c. invention and see if this approach would have worked...

    How about the Laser? Katz's Inquisition bans it because it's a death ray. Who at the time of the Laser's invention realised it would end up in every home as an integral part of the CD player? No-one. You simply can't guess the implications of an invention 100% accurately.

  • Maybe we could pay him to be a speech writer for Bush. Then, we could "accidentally" shoot down their plane sometime, and take out two birds with one stone (or maybe that would be two stones with one bird, or better yet two stones with one stone)

  • A gothic horror novel is probably not the fairest analogy to use when examining the evils and ethics of technology. This is a novel designed to horrify and play with emotions -- and Shelley did not necessarily intend to make social commentary on technology.

    That said, if someone is going to speculate what Dr. Frankenstein would do now, one has to wonder what would happen to Dr. Frankenstein after the novel completes. (Disclaimer: it's been 10 years since I last read Frankenstein.)

    Would Dr. Frankenstein swear off science for life? Would he take his knowledge about infusing dead flesh with life and use it to cure gangrene? Would he continue his research, and if so, would the neighbouring families leave him alone or stone him to death?

    See, the tricky part about science is that it's difficult to throw ethics onto something when you don't know what you can do. Once you know it can be done, then it's easier to see applications of this, which helps to determine the ethics of the situation.

    Is this ideal? No, probably not. But is it any more ideal to stop all scientific investigation until we can guess all the possible moral ramifications, and then determine if we should proceed with research, only to determine that we can't do it at all, or that we can only do the things we don't want to do, or that we have to do the things we don't want to do for years before we can research the thing we DO want to do? (Excuse my run-on sentence.)

    I put forth the classic example of splitting the atom, which has many possible applications, some of which help people, and some of which kill people.

    Incidentally, if JonKatz postulates the theory that a modern Dr. Frankenstein would hook up with a VC and create any harmful thing he wanted to, then I'd like to mention that a modern VC would also cut funding and support when the neighbour's family sued Dr. Frankenstein for 1.2 billion in punitive damages.

  • Whut?

    [snip]...a real aversion to intelligent discussion of issues.

    Now hold on just one second! Personally, I have found many a good conversation going on here at /.
    Why, just the other day I sat down at my Beowulf [beowulf.org] cluster with a nice toasty pants full of hot grits [satansbreath.com] to discuss relativity with Natalie Portman [natalie-portman.net].
    Granted, it can be difficult to hold a conversation with a waif who is not only emaciated and vapid, but petrified, but I tell you, we were solving the mysteries [enterprisemission.com] of the UNIVERSE, I say!

    /me sits back to watch all my karma fly away...

  • My experience in following /. is that intelligent discussion is the exception, not the rule.
    Whether you agree with Katz or not, his articles are thought-provoking. Which is why people hate him so much: actual thought takes real effort and is evidently painful to some people here.
  • You would prefer a communist like Katz dictating what you can and cannot buy? Or how much you are allowed to sell your services for?

    From the other angle, a group of elite socialists telling you you have to submit to blood samples so the government may have your genetic profile on record. Oh, by the way, we might need to use some of your genetic material for our national gene bank.

  • by under_score ( 65824 ) <mishkin.berteig@com> on Friday November 10, 2000 @07:49AM (#631443) Homepage
    Katz has brought up some good points, but I think he left one of the most important unsaid: specific technologies are not neutral in their effects on society, the environment, and the economy.

    Marshall McLuhan (a Canadian, BTW) said "the medium is the message". This has been continually misunderstood as "the medium affects the message". But in fact, any medium (read: technology) fundamentally has a message, which is its effects on society, the environment and the economy. As a simple example, television is not possible without some pretty serious infrastructure: studios, transmission systems and receivers. This infrastructure costs a huge amount of money to create and maintain, so television can never be used effectively by people or groups without money.

    Not only that, but despite a bit of unpopularity of the concept of globalism, technologies now have an immediate global effect (Linux would not be as far as it is today without globalism). Ignoring this effect is arrogance of the most despicable kind, and it is common among corporations (the dark underbelly of globalism).

    I have been writing an essay on this topic of a moral and social framework for analyzing technologies. It is still very much in progress, and there are parts that are sounding a bit old, but for what it's worth, here it is [berteig.org].
  • Jon, I really want you to take some time off and think long and hard about what you are going to write about next.

    Until you have something fresh and interesting to say, go fishing or something and get some inspiration. You've spent the last six months flogging the same old issues, just slightly rehashed so as to constitute a different checksum than the article before.

  • It ought to be illegal to distribute people's personal information with their knowledge and permission.

    Eh?

  • by ichimunki ( 194887 ) on Friday November 10, 2000 @07:58AM (#631446)
    To respond to this post and to the parent in hopefully one swell foop: the physical world is rife with artificial borders created by groups called nations, borders that rarely follow any distinguishable or significant physical boundary and rarely reflect any real separative quality of the people confined within them (most nations have at least one ethnic minority). Given that, why would you expect a transnational group of infotech "haves" to mount any concerted effort to declare their infotech no longer subservient to their individual nations of origin, when they must, or want to, obey their nation's will in all other matters?

    The best we can hope is that it will become more and more obvious just how artificial those real world national boundaries are, and that rather than regroup along ethnic or ideological splinter lines that all humans will understand their commonalities and work to abolish those sorts of constraints. Once that happens maybe we can talk about independence or freedom as personal liberties and what exactly that will mean for us all as citizens of Earth. Not just on the internet, but in all of life, which is the only way the internet or information will ever be truly free.
  • Cool. Name calling from Taufiq and Rand waving from the Dragoness.

    Point is, imo, it is more likely that you'll have to submit your blood samples, etc. under a right-wing govt. than otherwise. For our own good. I can't tell you how many nice normal people think that suspending protections against unreasonable search and seizure is okay in the name of the war on drugs.

    Sure, it's been a Democratic administration behind Carnivore, but don't tell me Republicans don't want exactly the same thing.
  • Yeah, I know that there is no chance of something like that happening, I just wanted to be an optimist for a moment. :)

    I don't know if you know about this, but the smallest nation in the world is not the Vatican, it's a nation called Sealand, a former British sea fort. And they've decided to become a haven for internet privacy. Here's a quick description of the situation...

    http://www.alternet.org/story.html?StoryID=9315 [alternet.org]

    "The handful of crypto activists living in the world's smallest country is prepared for a blockade. Driven by a passion for Internet privacy, they've brought enough food, water and fuel for a year and moved to Sealand, a 25-yard-long steel and concrete former World War II fortress six miles from the English coast. In 1967, an eccentric former British major named Roy Bates declared Sealand a sovereign territory, eventually issuing his own stamps, flag and currency. Forty-four years later, Bates, now the crown prince of Sealand, has leased his island to a group of techno-libertarians and their start-up, a data sanctuary called HavenCo which promises cyber security and which may affront many of the world's major nations."

  • there is also one question that needs to be asked:

    Can we stop it?

  • Katz has the nerve to call himself an individualist, and writes this:

    Perhaps we should require that before new technologies are licensed, deployed or sold, we need a technological impact statement. Like the environmental statements designed to make people aware that their surroundings could be affected by construction or research projects, a TIS would mean that before projects like the gene map are sold and distributed, ordinary people are aware of the technology and its possible impact on their lives and those of their children.

    This is not individualist. And it is not pro-technology. I can hardly think of a better way to ensure that technological development is restricted to the well-financed and well-connected.

    Shame on you, Katz, you big poser.
  • That said, if someone is going to speculate about what Dr. Frankenstein would do now, one has to wonder what would happen to Dr. Frankenstein after the novel ends.

    He would decompose. He dies at the end. ;-)

  • Yes, this is pretty much the same old thing, and Phloighd is right about the money (and the Good, if I may) being in solutions.

    But who has the impetus to look for solutions unless the conversation brings up the idea that the solutions need to be looked for?
  • > Do we need it?
    No, we don't need anything outside of Maslow's
    hierarchy of needs, the basic things required to
    support relatively normative human life. If
    technology helps us fill one of those needs,
    perhaps it can be needed.

    > Can we support it? Can the people who buy or
    > use what we make get free, readily-available
    > help?
    Who says they're entitled to free,
    readily-available help? A whole industry has
    arisen around support and training services. Can
    you become an ASE certified mechanic without
    paying a dime? I don't think so. Does that mean
    you need to be at a mechanics level to drive a
    car? Not at all.

    It's up to the educational system to step up to
    the plate and add new technologies to the
    curriculum, as has happened for hundreds of years.
    Older folks may have to pay a bit more to get this
    new knowledge, but they won't die if they don't.

    > Are new technologies open to peer review and
    > scrutiny, that is, are the software, hardware,
    > systems and design of new technologies
    > available for public and other inspection
    > in order to root out potential mistakes,
    > problems and flaws?

    That's what QA is for. I believe in Free Software
    and Open Source as much as anyone. You want to
    find flaws? Do like they do with cars... slam them
    into brick walls at 40mph. Or get Underwriters
    Labs or someone to test.

    > Will everyone have equal access to new
    > technologies, or will they become the property
    > of corporate and social elites with specialized
    > knowledge and lots of disposable income?

    I don't see this as a problem. Not everyone can
    afford a car. Or a telephone. Or a house. Or a
    decent meal. This digital divide stuff is crap.
    What, the "haves" should give stuff to the "have
    nots" because it would be more fair since the
    "have nots" didn't have the same opportunities?

    My parents, and their parents, worked damn hard,
    sacrificed, to provide each following generation a
    better life than their own. I will do the same for
    my children. I will not do the same for someone
    else's offspring. That's their responsibility.

    At this point in my life do I give to charity? No.
    Will I in the future, when I have a secure
    financial situation? Sure. Everyone deserves to
    have their basic needs met. People don't "need"
    to live in a mansion, drive a Porsche, own a
    computer, access the internet, or have a
    genetically engineered pet that's a perfect
    companion and teacher for their children.

    Those that do are lucky, they have advantages.
    But to claim that having advantages isn't fair
    is tantamount to claiming life's not fair, that
    everyone should be equal.

    The only way everyone will be equal, physically,
    mentally, financially, would be if everyone was
    dead.

    > Do new technologies have unintended
    > consequences?
    That's kind of silly. They always do.

    > Have academic, business or civic analysts
    > examined them? Have their ramifications been
    > explained to the people affected (as in
    > telling Victor Frankenstein's neighbors that a
    > monster would soon be running around the
    > community?).

    The ramifications can't be known until the
    technologies have been in widespread use for a
    number of years. Medical technology will have
    to be tested of course, which means guinea pigs.
    Heck, we're still trying to guess what the
    ramifications of the integrated circuit have been,
    and will be.

    > Can technologies be created with consideration
    > both for the environment and for consumer's
    > convenience?

    Yes, if consumers demand it and are willing to
    pay for it.

    > Can batteries, parts, cartridges,
    > support and service be standardized, so that
    > consumers don't have to continuously scramble?

    Consumers scramble because they want the best.
    I won't begrudge an incompatibility for an edge
    in functionality. So the new Boo Bah Battery will
    only work in a Boo Bah Camcorder. It gets 3x as
    much life as any other cordless camcorder. If
    people find value in it, they will buy it.

    > Can software and computer makers agree on
    > ethical standards for their product's lifespan,
    > so that people who invest in expensive
    > technologies can be assured that they will last
    > a few years, and that products and software for
    > them will be available in the future?

    Heh. No. Probably never. They're in business to
    make money. That involves giving people what they
    want, and charging them out the arse for it.
    Get over it already.

    > Can the sale and licensing of gene research to
    > private bio-tech corporations be halted until
    > critical social issues can be discussed and
    > resolved? The public has yet to grasp the
    > consequences of such researching falling into
    > the hands of a few corporations, lulled as they
    > are by scientific and political promises of
    > cures for cancer, aging and heart disease.

    Sure, but that means gene research will never
    go anywhere. If you think we can come to a
    resolution about the moral and ethical
    implications of gene research, you're living in
    a very different country than I am.

    > Is downloading music or a novel theft? Do the
    > ethics of copyright and intellectual property
    > need reconsideration? Or elimination? Is there
    > a more rational alternative to the Sonny
    > Bono and Digital Millennium Copyright Acts?

    I have problems with the idea of intellectual
    property only insofar as it goes to cover patents
    on processes or algorithms. I don't have any
    problem with it covering a creative work, like a
    novel or a piece of music.

    Do I like the RIAA? No. Do I like many of the
    artists who, unfortunately, find themselves in a
    position to be employed by them to support their
    craft? Absolutely.

    I think the micropayment model, the MP3.com model
    and the commission model are going to be the
    funding methods for most future art.

    The problem with the RIAA and even the MPAA, is
    that they, in large part, have a monopoly right
    now.

    > How can we ensure that technology and software
    > companies and Web sites prominently disclose
    > privacy provisions and implications? It ought
    > to be illegal to distribute people's personal
    > information with their knowledge and permission.

    It's simple. If you don't want your personal
    information to be distributed, don't give it
    out!!!! No one's forcing you to give real.com or
    whoever it is correct information.

    > Perhaps we should require that before new
    > technologies are licensed, deployed or sold,
    > we need a technological impact statement. Like
    > the environmental statements designed to make
    > people aware that their surroundings could be
    > affected by construction or research projects,
    > a TIS would mean that before projects like the
    > gene map are sold and distributed, ordinary
    > people are aware of the technology and its
    > possible impact on their lives and those of
    > their children.

    Ok, this would put a halt on all future progress.
    Do you really think that "ordinary" people will
    ever be fully aware of technology and its
    possible impact? People fear what they don't
    understand, and oftentimes people don't
    understand new things.

    Monumental change is forced upon humanity at
    regular intervals. Adapt, or become a fossil.
  • Do new technologies have unintended consequences?

    I think this is perhaps the only question that is posed that has any real importance. All of the other questions are just extrapolations of this question to different isolated situations or subjective points of view.

    If you view technology as simply being human progression into the ability to facilitate our needs and desires, then obviously any progression is going to result in a consequence that hasn't been considered to its full extent. This arises from the fact that as we construct new and better ways of doing things we also change ourselves!

    Humanity is not an isolated entity and it is not static in time. We constantly interact with our environment and with ourselves to bring about development on every scale from the individual to the global community. So in the end I believe we will never realize the full extent to which our actions (in the form of technology) play out and as such I believe that 'technology ethics' should not be a static value, they are something that we must constantly bring into question. The biggest problem with the current state of affairs is that people try to apply outdated ethics to technological problems. What most people don't realize is that these ethics are being used in contexts that they were never initially intended for.

    Another key problem is that for our technology to be evaluated properly we must also have a society that can appreciate the things that truly matter, like the environment and true freedom, instead of short-term economic gains and power struggles. It will be interesting to see if we can manage to evolve as a society quickly enough to keep up with the pace of our technology - otherwise you won't have to watch movies about the Matrix and whatnot... you'll be living it.

  • ...then we'd better stock up on firewall software. Right now, Jerusalem is the battleground between Christendom and Islam (neither of which I condone), and bullets and stones are flying all over the place.

    Yes, the term "New Jerusalem" is pointing out the fact that the Internet unites this motley crew that is the human race into one large melting pot. However, I fear the reactions that will inevitably take place which will produce disastrous results, much like combining a base with an acid, matter with antimatter, and time with antitime.

  • I was trying for humorous, but I guess it was kinda troll-like. Sigh, I am becoming like those I criticise.
  • An Island Fortress of Internet Privacy - http://www.alternet.org/story.html?StoryID=9315 [alternet.org]

    Here is the first paragraph...

    "The handful of crypto activists living in the world's smallest country is prepared for a blockade. Driven by a passion for Internet privacy, they've brought enough food, water and fuel for a year and moved to Sealand, a 25-yard-long steel and concrete former World War II fortress six miles from the English coast. In 1967, an eccentric former British major named Roy Bates declared Sealand a sovereign territory, eventually issuing his own stamps, flag and currency. Forty-four years later, Bates, now the crown prince of Sealand, has leased his island to a group of techno-libertarians and their start-up, a data sanctuary called HavenCo which promises cyber security and which may affront many of the world's major nations."

    I thought this was very interesting; a possibility that I never considered before.

  • by Anonymous Coward
    Quite frankly that would take alien intervention.

    I used to dream of a world without national borders, but as I grow older it's becoming more and more obvious that we've gone nowhere for the last several millennia. My highly educated colleagues routinely speak in derogatory terms of members of other races, religions or sexual orientations, and hinder their progression in society. As if I, born in this country, am somehow more worthy of a job or an apartment. Assholes. Ironically, I've said this to their faces and still I am considered better than someone who might have immigrated from only a couple of miles outside our national borders. And then they get mad when I won't bash the "foreigner" on the basis of how they talk or how they won't "adapt" (= lose everything that reminds them of their original culture) to our culture.

    I just read a book about Aldrich Ames, and the more I think about it the more I am convinced that it would serve this asshole nation right if I ever managed to betray it in the most damaging way to "her" worst enemy. People would get hurt? Who cares, if they choose to serve such an idiotic cause?

  • by Anonymous Coward
    Hoyas, schmoyas. Hear this!


    Once upon a time there were three Indians: the mama, Pocayenta; the papa, Geronowitz; and the daughter, Minihorowitz. One day Minihorowitz comes home and says, "Mama, I want to get married!"

    "Married! It's about time! You're an old maid already! Sixteen years old! Who's the young man?"

    "Oy, Mama, have I met a young man! Strong, heroic, ..."

    "What's his name?"

    "Sitting Bulvon."

    "And what kind of family does he come from?"

    "His father is Meshigine Ferd, the big macher of the Shvartsfus tribe."

    "Oy, what a wedding we'll have! All the Shvartsfus, all the Shmohawks, the whole mishpocheh... Oy oy oy, we have one problem!"

    "What's the matter?"

    "The teepee isn't big enough for all the wedding guests. Geronowitz! Geronowitz, get off your tuchus and go get me a buffalo!"

    "Why do you want a buffalo?"

    "With the buffalo meat I can make a nice stewed buffalo tsimmes. And with the hide I can make the teepee bigger, and we'll be able to invite the whole world to the wedding!"

    Off goes Geronowitz. One day. Two days. No Geronowitz. A week later, Geronowitz comes home with nothing in his hands.

    "Schlemiel! Where is my buffalo?" says Pocayenta.

    "You and your buffalo tsimmes! To blazes with you both!"

    "What's the matter?"

    "The first day, I saw a buffalo. Not big enough for the teepee, not good enough for the tsimmes. The second day, I saw another buffalo. Good enough, big enough, but with such a rotten hide! An uglier buffalo I have never seen! A few days later, I saw yet another buffalo. Big enough, good enough, a perfect buffalo!"

    "Nu, so what happened?"

    "What happened? I went to slaughter the buffalo. I looked in my bag, and, goyishe kop! I had brought along the milchig (dairy) tomahawk!"

  • by Andy_R ( 114137 ) on Friday November 10, 2000 @08:16AM (#631460) Homepage Journal
    Here in Britain, we already have strict 'ethical' controls on genetic research, especially where human embryos are concerned.

    What happened? The researchers moved abroad and carried on as normal.

    No matter how 'unethical' any kind of research is, there is always going to be some jurisdiction willing to reap the possible financial rewards.

    As soon as there is a big financial reward to all the opiate chemistry research going on in South America, or all that germ warfare research going on in Iraq, or the skin-colour-specific toxin research that was rumoured to be going on under the old South African regime, let's see how quickly we throw away our moral stance and cash in.

    Am I scaremongering? No. I'm living in a city that was trashed by Nazi V2 rockets in the Second World War. Where did all the rocket researchers go after the war? They set up a little thing called NASA.

    See my point?

    Do the ethics police weigh up the peaceful results of the space programme (microchips, for example) and decide the deaths in London were worth it?

  • We can??? I'm not so sure about that; remember back a couple hundred years when they tried all that Utopia stuff? Capitalism should work if we could. Some people just have different levels of "nice". The reason America works so well is that everybody does what's best for themselves, and that works out to be the best for the whole (capitalism, democracy).

    Maybe I'm just too much of a pessimist.
  • by WombatControl ( 74685 ) on Friday November 10, 2000 @08:18AM (#631462)

    ...this one takes the cake...

    Perhaps we should require that before new technologies are licensed, deployed or sold, we need a technological impact statement. Like the environmental statements designed to make people aware that their surroundings could be affected by construction or research projects, a TIS would mean that before projects like the gene map are sold and distributed, ordinary people are aware of the technology and its possible impact on their lives and those of their children.

    The fact is, we can never predict where technology will take us with any degree of accuracy. Could the Wright Bros have said, "Well, in fifty years our little POS contraption will develop into hypersonic spy planes that could observe the Soviet Union, so maybe we should keep this secret so as not to upset the balance of power"?

    Could Tim Berners-Lee have said "gee, this little Web thing could be used to distribute pornography, so maybe I should keep it secret for a while longer."

    Could Rob Malda have predicted that Slashdot would end up overloading web servers world-wide?

    The answer to all is "no." To mandate some kind of TIS would not only be impossible, but it would be dangerous. We can't predict the course of technology, we can only adapt to it. Even without technology, life adapts, otherwise it wouldn't currently be around. Let me put it succinctly:

    The greatest risk is not taking one at all.

    The truth comes nowhere close to being as black and white as either of you is putting it. I think that the Guardian (a British newspaper) put it best in an article entitled 'two just causes at odds with one another'. In this latest intifada, I would say that the Israelis are the more guilty party, as they have often been before, but the Palestinians have not all been innocent victims either (although for many of the casualties, that is undeniably the case).

    The recent origins of the current conflict can be traced back in particular to two events: the failure of the Palestinian authorities to ensure that Jewish holy sites under their care were treated with respect, and Arafat's continued brinksmanship in response to the most generous offers yet made by an Israeli premier, made at considerable risk to his own political position. In his uncompromising rejection of these terms, Arafat forced the end of that round of peace talks and pushed Israel's government to a more hardline stance.

    But neither action can in any wise be used to justify the high level of Palestinian civilian casualties or the outright declaration of martial law with regard to Arab-Israeli citizens, which has extended to the house arrest of tens of thousands of civilians because of their ethnic status and the use of air strikes against the political opposition.

    You're right. It does sound childish. And extremely naive. The technical problem of figuring out what's on those servers often has a solution that isn't technical at all. Don't let the person sleep--most other ways are nastier--until she gets you what you want.

    There are no technical solutions that make people free 'cause freedom is a political thing, not a technical thing. The sooner people get it out of their heads that some magical whizzy toy, rather than hard work and sacrifice, gets and keeps freedom, the better.
  • by sulli ( 195030 ) on Friday November 10, 2000 @08:24AM (#631465) Journal
    Well, if you want to spend all your time trying to get permission to do things in technology, like a developer dealing with the planning commission, vote for this idea. Of course you'll end up with another Permit Raj, and technological innovation will grind to a halt, but the bureaucrats won't mind, because their jobs will be secure.

    Yuck.

  • by loose_change ( 196779 ) on Friday November 10, 2000 @08:25AM (#631466) Homepage

    I'm not declaring that I get the point, but I seem to have a different response to this article.

    It seems to me that Katz' point isn't so much to declare that we should regulate and control new technologies. Instead, Katz' understated point is: given the wide range of information exchange available on the internet, can we have more civic debate on these issues, based on better mutual understanding?

    New Jerusalem, indeed.

    I like the ideal of the internet as a distributed version of the public square where ideas are exchanged. In many ways, it should be the ideal forum for discussion and consensus building on just such issues as Katz raises here. One might hope that a tech guy could talk to a farmer and each get some clue about how policies and technologies affect each other's lives, just to grab a random thought-example.

    But that isn't what happens on the net, and it isn't usually what happens in a public square. People of like mind tend to band together and reinforce their own opinions. We ghetto-ize ourselves on line (even without push technology) in similar fashions.

    Yes, there will always be the "should we, just because we can?" argument, just as there will always be the Darwinian response. The thing is -- and I think this thought lies behind much of Katz' writing -- how much may we be (culturally and materially) sacrificing our long-term survival for short-term gains?

  • I feel your pain. On the plus side, my wife and I will be able to ring in the new millennium without going through the suffering of paying outrageously high prices for stuff.
  • Enough already, Katz. Your analogy with the biblical phrase "New Jerusalem" is EXTREMELY flawed. Have we undergone an apocalypse? Has the Day of Judgement come and gone and I just slept through it? If not, then stop saying we're approaching the "New Jerusalem".

    For the record, the phrase "new Jerusalem" appears twice, both times in Revelation: chapters 3 and 21.

    from chapter 3: 12: He who conquers, I will make him a pillar in the temple of my God; never shall he go out of it, and I will write on him the name of my God, and the name of the city of my God, the new Jerusalem which comes down from my God out of heaven, and my own new name.

    from 21: 1: Then I saw a new heaven and a new earth; for the first heaven and the first earth had passed away, and the sea was no more.

    2: And I saw the holy city, new Jerusalem, coming down out of heaven from God, prepared as a bride adorned for her husband;

    3: and I heard a loud voice from the throne saying, "Behold, the dwelling of God is with men. He will dwell with them, and they shall be his people, and God himself will be with them;

    4: he will wipe away every tear from their eyes, and death shall be no more, neither shall there be mourning nor crying nor pain any more, for the former things have passed away."

    5: And he who sat upon the throne said, "Behold, I make all things new." Also he said, "Write this, for these words are trustworthy and true."

    So Katz, please tell me how this is a halfway respectable analogy? Are you just trying to throw in a smattering of apocrypha to get us all a bit more neurotic and worked up?

    The only possible way this analogy works is if you firmly believe not that the genome project is a bad thing, but that God is now among us, leading us along, giving us the power of perfect creation and bringing back that which is dead (extinct species). If that is your logic in using this analogy, then your above argument is completely asinine, since you're arguing AGAINST your analogy. By implying that the Net might be the "New Jerusalem", you are sanctifying the Net, since in the New Jerusalem God is among us.
  • by Anonymous Coward
    A while back I saw Bill Joy in person at the Harvard Kennedy School of Government. He was "touring" as a follow-up to the Wired article he had written. I would have a hard time calling him a Luddite, and he seemed pretty spooked by what he saw ahead.

    He didn't preach any answers, as I doubt there are really any concrete ones. Morality is a system of blurry lines, with faith in the uncertain right answer. He seemed to suggest that the days of searching for the truth, without also thinking of the social, ethical, and other ramifications of such a search, are over. If you extrapolate out the amount of violence towards fellow human beings, the availability of information, and the destructive capability of technology that is closely related to information, things do not look good.

    Think about this. People often release a virus on a computer because they can, just to show that the exploit is there. With biotech, people might release a virus that "runs" on the human body, just because it is possible, to show that the exploit exists. The information necessary to accomplish such a feat might be readily accessible from a dorm room, and all it would take is *one* disaffected, dejected individual with the inclination and know-how to do it. How about twenty 100% fatal modifications of the influenza virus, all released at once?

    In the question/answer period afterwards, someone asked about using technology to combat technology. He answered that historically, destructive technologies almost always predate and outpower defensive ones, such as the bullet before the bullet-proof vest, or the atomic bomb before "Star Wars".

    He didn't have any suggestions for a quick remedy, but he expressed interest in dialogue. He suggested bringing together people from many disciplines, including scientists, business people, teachers, religious leaders, and politicians, to discuss a way to get technology back into some human context. He also hoped that the arrogant technologists who think laypeople can't understand the technology, and therefore shouldn't have any say in its development or use, might reexamine their position.

    Overall, it was very eye-opening. I left with a deep respect for him for bringing these issues out into the open, when other "leaders" in the industry are more concerned with their own bottom line than the consequences of the process in which they participate.

    If you subscribe to the theme, almost prevalent these days, that humanity is doomed, then by all means, continue on your current path. We are all responsible for the current and future states of humanity, and it will take a new level of awareness and consciousness to avert the almost certain disaster that awaits. The excuse that you give, and that I give, will give rise to the excuse that the next Dr. Frankenstein gives to himself when undergoing an experiment that shows just how weak our weakest link is. Love is the only answer, the only currency, and the only hope, so love the next person as you wish you could love yourself. Please don't use my imperfection as an excuse to avoid your conscience. And if this language makes you uncomfortable, then I suggest you reexamine your position.
  • Perhaps instead of New Jerusalem, it should be thought of as New Babylon? The similarities between the Tower of Babel and the internet are really amazing.
  • You'll get no argument from me on this one. Too bad you posted as AC, since a lot of readers might not see this valuable point.
  • Freedom is not something you get. It's something you take. And you have to continue fighting to keep it.

    And when it comes to figuring out what's on the servers, we have laws. Laws that most of the time say you are to be regarded as not guilty until proven guilty. And with some technical solutions (like freenet), you cannot prove that a person actually posted some information, or that the person actually downloaded it, or delete it from the net (without shutting down _all_ servers). On freenet, if I put a document there that you don't like, you can put a pistol to my head; there is still no way I can remove it...

    There are technical solutions for getting freedom. They are hard work. Freedom is political, but politics can become purely technical if technicians start to argue.
  • Too bad i don't have mod privileges right now, because if i ever saw a perfect example of "Offtopic", this is it.

    You're diverting the discussion from the argument to the semantics, which is akin to attacking an idea based upon its grammar.

    And by replying, i'm helping you do it. Egads! The abyss has gazed too long into me! Monster, I have become! Cursed was the day of your birth, yea, and mine as well!
  • Technologies in and of themselves are not ethical or unethical. They are good or bad depending on the context in which they are used. That lack of objective context gets to the core issue with group ethical control of technologies and their development.

    Ethics are an individual thing. Choices about whether to pursue technologies or ideas are made by people who understand them or the ideas behind them enough that they see some potential new utility.

    In contrast, institutionalized ethics as a group activity in the form of ethics boards and impact committees seem less useful. Medical ethicists argue over what's ethical here and now and later, but those views are fluid, depending on their institutional affiliation, their government, and the exigencies of the moment (and likely the grant money available). It's simply too much to believe that technologists in other areas would be much different.

    It's interesting to note that a product like Thalidomide, which basically became taboo in the 50s, has now been rehabilitated for exploration and use in different medical contexts 50 years later. Presumably we now have a deeper and more mature understanding of what it does and how it works, but that implies that someone was poking around with it in a different context.

    Technologies tend to be pursued whether they are ethically blessed or not. Technologies like cheap virus cookers and nuclear ballistic missiles are pursued and perfected by governments and willing individuals, as well as any other group with the means and desire to make a controlling social impact. The only way to prevent (dangerous) technologies of this sort from spreading is to keep them secret. But the chances of any given technology being kept hidden seem slim to none, given that

    • Governments aren't particularly secure organizations, especially in democratic countries, where everything outs eventually, if only for budgetary reasons,
    • Serendipity and synchronicity among different researchers and groups will occur anyway (who invented calculus: Newton or Leibniz? who invented the computer: Babbage or Mauchly or Atanasoff?) and
    • Private intellectual property rights are merely legally protective in nature and don't prevent others from reinventing a similar solution if they want to spend the time and energy.

    Finally, it seems particularly naive to think that all consequences of a technology will be determinable and evaluable in advance by some well-meaning ethics body. Nature will have its way, and humans are far too creative to restrict themselves from exploring new or related ideas. "Unintended consequences" seems a pretty good descriptor for quite a few of the major discoveries and disasters throughout history. Trying to eliminate them seems a fruitless task at best. Better to try to foresee the nastier ones and work aggressively to forestall them or blunt their effect before they turn from consequences into disasters. But that seems a dim hope as well.

    Personally, it gets back to individuals making smart choices about what's ethical to pursue, what's ethical to share and when, and whether a particular technology (warhead design, nanobots, gene sequencing, AI) is basically beneficial or basically malevolent in the broader set of social contexts within which we work.

  • intended to explore the ethics and evils of technology. That was its whole frickin' POINT.

    In fact, its genesis was a coffeehouse conversation about just these issues.

    The fact that it was a gothic novel is almost incidental. That was merely the medium which Shelley chose to expose the ideas to the widest reading public.

    KFG
  • Saying that a technology shouldn't be explored until the ethical questions are resolved is the same as saying that the technology should be suppressed forever. There are still people who question the morality of vaccination, for example: it is applied unequally to rich and poor; it subverts God's plan for who lives and who dies; it could be an Evil Conspiracy to infect people with diseases. Shall we call a moratorium on vaccination until every last one of these people is convinced?

    If you read arguments against genetic engineering by someone like Jeremy Rifkin, they exactly parallel arguments against vaccination. He thinks we should wait until all issues are resolved. But that implies that moral issues are like scientific facts; something that sufficient study and observation can clarify.

    In fact moral issues in the real world are usually only resolved through the death of one faction or the other. We know slavery is bad now because all slave owners are dead and buried and no one holds that viewpoint anymore. But no "proof" of the evilness of slavery has been discovered that was unknown in the 1700's. The abolition of slavery was accomplished simply by pushing ahead with it despite objections and letting ex-slaveowners whine about it until they died off.

    Moral issues can only be resolved after very thorough and widespread implementation, not before!

  • WHOSE ethics and morals? And how do you intend to impose them on those who do not agree with you?

    The internet is a tool for directly connecting all of humanity. That means it connects all of humanity's ethics and morals.

    That means *that the internet is only capable of the group ethics and morals of the mass of humanity.*

    In other words, it is by its very nature a NULL factor.

    KFG
  • What you are proposing in your thoughts is some kind of technological Utopia.

    Hrmmm.

    Interesting read tho, I'm busy dreaming on right now...
  • read the last paragraph. i am not.
  • In the early stages of the automobile, there were hundreds of manufacturers in the US, and lots of unsafe cars. Now there are the Big Three and cars are much safer, but do you think that during the early stages of the industry anyone could possibly have predicted what the automobile would become?

    Ah, but the safety of the modern automobile is due to outside activism. People like Ralph Nader monitored the developers of the technology and forced them to include safety features not because the average consumer was clamoring for them, but because they were necessary for public well-being.

    We need external monitors for the new technologies as well. It was relatively easy to engineer seat belts, air bags, structural reinforcements, etc. after the cars had been around for 50 years, but developing safeguards for biotech after it has been released isn't guaranteed to be so easy.

    Let's monitor and take precautions so that we're:
    1. very sure we want the genie out of the bottle,
    2. very sure of what the genie's going to do once it's out,
    and/or
    3. able to put the genie back in the bottle.

    With proper monitoring, we don't have to be Kreskins, but the monitoring process, combined with careful ethical analysis, will make our predictions better. Just because our predictions can't be 100% accurate doesn't mean we shouldn't try.
  • Yes, you are.
    (No i'm not!)
    (Yes you are!)
    (No i'm not!)
    (Yes you are!)
    (You're stupid!)
    (I know you are but what am i?)
    (Etc. I just thought i'd save us some time and carry the argument out to its logical conclusion. All in jest, sir. ;)

    You are still attacking the analogy (which really has little substantive use in this article) rather than the actual content of the article.
  • "I have worked hard for nearly two years, for the sole purpose of infusing life into the human body," wrote Victor Frankenstein in Mary Shelley's great novel. "For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart." Doesn't everyone feel this way after a big project has been completed? I think the moral of the story is that one should avoid letting minor setbacks in alpha tests knock one's self confidence. I mean, how much damage can one monster do? He wasm't even a programmer. Marios
  • Yes, I totally agree, and I'd even claim Katz isn't that terribly awful.
  • This is a load of crap. Corporations predict where their technologies are going to go all the time. Bell predicted many of the long-term social effects of the popularization of the telephone _long_ before it was popular. Some companies have 50, 100 or 500 year plans (although it is not so common in the West). Want some documentation? Check out the book "In the Absence of the Sacred" by Jerry Mander - and don't let the title prejudice you: it is a very well written and thought out book (although it does go a little over the top sometimes).

    The whole point of science is to develop models which PREDICT the future. These models are becoming more sophisticated and more general as we learn more and more. Are you so arrogant as to think that science cannot address the economic, environmental and even societal effects of technologies?

    By the way, I want to reveal my bias: I am writing a paper about exactly this issue. I believe there are specific useful areas where we can predict the effects of technology. The paper is not done, and has some parts that are getting old, but if you are interested, here it is [berteig.org].
  • Yup! Which is one of the reasons why we will never solve humanity's problems until we all understand the fundamental unity of humanity. Peace is unattainable, social good is unattainable, global prosperity is unattainable, and environmental health is unattainable until that unity is recognized, and accepted by the people of the world.
  • Ironic; that's the sort of thing that shut down the Israeli-Palestinian summit recently.

    Barak suggested Jerusalem and other holy places should be placed in the hands of a third party. Arafat, correctly, chose not to take his offer without putting the idea to his people. Neither side is ready to deal with that. Think of the way the US cries out about free speech; about the same thing would happen over religion.

    Of course, Clinton just had to rush it to leave a mark on his way out.

  • by Anonymous Coward
    Reality check: the problem with your "perhaps" is the same old one: WHO DECIDES?

    "Design For Evil"

    Any innocent product which becomes suddenly genocidal in the hands of a tyrant has been designed by a dangerous naif. Every design process is incomplete unless it takes into careful consideration what could be done with the product by a dictatorial megalomaniac in command of a national economy, a secret police, and a large army.

    The above quote is from Bruce Sterling's Viridian Design Principles. [thehub.com.au] I am not saying that all designers must be beaten with blunt instruments until they think this way, but the world would be a better place if this kind of thinking was incorporated into the beginning of the design process.

    Who decides? Ultimately, you do. You don't live in a bubble. If you're smart enough to build the thing, you're smart enough to anticipate how it'll be abused. If you're ethical enough to worry about Jon Katz indiscriminately handing out legal coercive tools, you're also ethical enough to worry about yourself indiscriminately handing out technological coercive tools.

  • "The ethics of technology" has been a college bull session topic since forever. Unfortunately, it has never accomplished anything and never will.

    Should we "allow" technological advances? How do you propose to stop them?

    Even if we could, how do we decide which?

    How do we predict the effects of unknown technologies? After all, they're "unknown". Duh!

    Who are "we" in the preceeding questions, anyway? Any time you say "everybody agrees that ...", either you're lying or you have a very restricted definition of "everybody".

    Biotech questions seem to be decided by legal battles between a tiny group of technophobes and international megacorporations who believe they have billions, if not trillions, of dollars at stake.

    Look at the development of nuclear weapons. The peacenik types hate them, of course, but they ended the war with Japan (killing maybe 200K in Hiroshima and Nagasaki vs the estimated 1M American and 5M Japanese deaths that would have resulted from an invasion of Japan). After WWII, they prevented the Soviet Union from simply rolling over western Europe the same way they rolled over eastern Europe.

    OK now, good or bad?

    For another example, look at the Internet. Until the development of the Web, it was simply a geek toy. Once some smart guys in Switzerland (significant!) wrote a little program to make FTP easy to use, the whole thing exploded so fast that our hypothetical review body would have been left totally in the dust.

    Frankly, the only really effective checks we have on new technology are our liability laws. Hurt people and you get your arse sued off.

    --
  • The current answer to most of Jon's questions is a simple NO. Why? Because the market says so.

    The downfall of a market economy is that the Almighty Buck has the last word 99% of the time. The other 1% is when people realize that the cheaper option isn't more environmentally friendly, or that they just LIKE doing such-and-such, or that the other thing isn't available in their neck of the woods.

    Mr. Buck is an ugly master, and doesn't really care that there are monsters running around outside... they're cheaper than all the alternatives.

    "There's a party," she said,
    "We'll sing and we'll dance,
    It's come as you are."

  • Jon Katz, twice today.. enough is enough.
  • The problem with technology is that it doesn't come in black and white, or even shades of grey. Technologies only have an 'ethical dimension' when they are applied in some way (spoon technology is not intrinsically evil, but jabbing a spoon into someone's eye is often considered to be evil). The technology which mutates babies into tiny killer machines could also cure them of cancer, and so on.
    Technology itself cannot be 'ethically' evaluated (anyone study ethometrics?), only the uses of technology. Even if you did wish to block a particular technology, I think such an attempt would be rather ineffective unless the development of that technology required funds which only a world-government could supply.
    Even the ethical evaluation of technology is more than problematic, particularly since it would require a system of government based on ethics instead of stability for both evaluation and implementation.
    The ramifications and consequences of new technologies can only be explained to those capable and willing to understand - this would require a complete revamp of our system of education. Understanding of technologies is currently limited and perforce leads to an 'elite' who understand them. I think it was Benjamin Franklin (?) who supported the idea of good education for all partly because it was necessary for an effectual democracy.
    To make effective judgements, people would need to be not only technically educated but ethically educated. Currently such education is random, slapdash and factional - the power base of most religions is that they teach the One True System of Ethics. In many countries ethics and religion are confused. Religions will, in general, resist the creation of an accepted system of ethics, partly because it would likely differ from their creed, but mostly because it would erode their power. If the negation of an ethical backdrop to events weakened western religions, what would the acceptance of secular ethics do?
    When even elementary, millennia-old technologies like birth control are still hotly contested on supposed ethical grounds, how can you assume that we can wait for contemporary issues to 'simmer down' to a consensus opinion?
    Marios (was that inflammatory?)
  • Maybe I have 'little awareness that everybody isn't as technologically-inclined, -equipped or advanced as' myself. Despite that possibility, it appears that the rapid growth of technology (insert cliche) or paradigm shifts coupled with the internet has made this a moot point.

    How can one hinder the rate at which technology emerges? If governments intervened whenever any open source code came out, saying 'This is too advanced,' who here would not be pissed off? The same can be said for any technology. By holding back innovation to allow the masses to catch up, we ultimately squash the desire to innovate. Besides, how can you stop the application of ideas? Look at the dude (sorry, forgot his name) who built the 'Super Gun' for Iraq. He had the idea of launching satellites into space with very large guns. The US refused to fund him, so Hussein gave him funding to build a military version. Some would say he did a bad thing. I say he did what he had to for a passion that needed to be sated. Seriously, I would kill someone if I knew it would get us to Mars that much faster; would you?

    Unfortunately some people will always be technologically ignorant, as has been the case throughout history. Rather than hindering technology, society should try to raise technological awareness through technology. (yeah, yeah, catch-22, but you need a flame to build a fire!)

    People who see the future before it happens are driven to it like moths to the flame, perhaps blindly. Nonetheless, the momentum they create is powerful and only slowed by tremendous pressure. Galileo, Copernicus, and other giants suffered for this drive, and still their names ring truer than those of any Pope or inquisitor. We cannot allow the masses to make that suffering for naught.

  • "Cyberpunk in the 90s" by Bruce Sterling [pugzine.com]

    Consider FRANKENSTEIN by Mary Shelley, a wellspring of science fiction as a genre. In a cyberpunk analysis, FRANKENSTEIN is "Humanist" SF. FRANKENSTEIN promotes the romantic dictum that there are Some Things Man Was Not Meant to Know. There are no mere physical mechanisms for this higher moral law -- its workings transcend mortal understanding, it is something akin to divine will. Hubris must meet nemesis; this is simply the nature of our universe. Dr. Frankenstein commits a spine-chilling transgression, an affront against the human soul, and with memorable poetic justice, he is direly punished by his own creation, the Monster.
    Now imagine a cyberpunk version of FRANKENSTEIN. In this imaginary work, the Monster would likely be the well-funded R&D team-project of some global corporation. The Monster might well wreak bloody havoc, most likely on random passers-by. But having done so, he would never have been allowed to wander to the North Pole, uttering Byronic profundities. The Monsters of cyberpunk never vanish so conveniently. They are already loose on the streets. They are next to us. Quite likely *WE* are them. The Monster would have been copyrighted through the new genetics laws, and manufactured worldwide in many thousands. Soon the Monsters would all have lousy night jobs mopping up at fast-food restaurants.
    In the moral universe of cyberpunk, we *already* know Things We Were Not Meant To Know. Our *grandparents* knew these things; Robert Oppenheimer at Los Alamos became the Destroyer of Worlds long before we arrived on the scene. In cyberpunk, the idea that there are sacred limits to human action is simply a delusion. There are no sacred boundaries to protect us from ourselves.

    Or, to go musical, Bruce Cockburn:

    Let's hear a laugh for the man of the world

    Who thinks he can make things work
    Tried to build the New Jerusalem
    And ended up with New York
    Ha Ha Ha...
  • Corporations try to predict where their technologies are going. They aren't very successful. Human beings cannot predict the economic, environmental, and societal effects of technologies.
  • The greatest risk is not taking one at all.

    Wow, that's a nice aphorism. Did you come up with that, or do you know who did? I can't find it on ag [aphorismsgalore.com].

    --

  • I agree with previous posts: a TIS requirement is a terrible idea and would certainly put more red tape in the way. We need -less- red tape. I will develop new technology and I will use it. I will also use a lot of the technology that others like me develop. If I (or they) do something with the technology that goes against the grain, what will you do about it? Sue me into oblivion? Maybe. But will that stop the technology? Hardly.

    Bottom line is if you can't keep up with the new world you become extinct. Not that you shouldn't challenge technologies you don't agree with, go ahead. But if you stumble on the way you will certainly be trampled by the flocks running after. This is life.
  • Absence of a reason not to do something, is not, alone, a reason TO do something.

    I have no reason not to run around flapping my wings and clucking. I guess I should do that then.
  • by Sodium Attack ( 194559 ) on Friday November 10, 2000 @11:48AM (#631505)
    It's an allusion to Revelation 21 [gospelcom.net]. The "new Jerusalem" has little to do with the Jerusalem we know.
  • "New Jerusalem" is an allusion to Revelation 21 [gospelcom.net]. It has little to do with today's Jerusalem.
  • You can get a login, and in your preferences you can select not to see anything he posts. Simple. Now quit your bitching!
  • Do you happen to be either a scientist or a philosopher of science? It is true that science tries to achieve repeatability of results. It also tries to falsify and verify claims. And it also tries to develop models which describe nature. Those models are not prescriptive, they are descriptive. But part of that descriptive nature is their ability to predict those repeatable results. Hmm. Prediction. If we couldn't predict the future (repeatable results), we would have no basis for building anything. Check out the work of Larry Laudan for some very interesting and convincing descriptions of what science really is. As a last point, it is interesting that you used the word "hypothesis" - look it up in a couple of good dictionaries. No predicting, indeed!
  • The Net is the most powerful means to free association we've ever had. If it allows abuses, it allows reactions, which allow organization, which allows implementing checks and balances.

    It wasn't until I became destitute, last year, that I discovered I was information-rich. A million sites, vast search engines, MySQL, PHP, Wikiwiki, you know the litany. Bottom line: Any group of people can now implement any protocol and any record-keeping system they want.

    If you want to fight abuses, OK, you have to deal with the constituted power structures. Learn how to prepare a legal case, how to lobby, how to use leisure instead of money, and how to beg for money anyway. But if you're serious, you're in a far better position than any pre-Net campaigners. And note how sensitive companies can be to mere attacks on their reputations.

    Hegelian dialectic on fast-forward.

  • Uh, Ralph Nader is a private citizen, just like the rest of us. The safety of automobiles became an issue for various reasons (Nader among them) and the populace pushed the legislators to make rules. And, for the most part, this works out fairly well. There have been some hiccups along the way like mandatory air bags that for smaller folks actually increase the risk of death, but mostly things work fine.

    However, if the government had jumped in at the beginning of the automobile revolution with legislation, we would still be riding horses around.

    In other words, progress is inherently unsafe. It is impossible to know what new dangers each invention will bring. Tim Berners-Lee had no idea that his invention would create the Internet Predator; he just knew that it would solve his particular problem. Pairing inventors with bureaucrats who have the power to veto ideas because they might be unsafe would stop progress altogether. Not to mention the fact that a good portion of our true progress is made while our scientists are looking for faster ways for us to kill each other. You might not like the atom bomb, for example, but there certainly are some valid uses for atomic power and radioactive isotopes. And I certainly am glad that the United States ended up with this power first instead of some other country. The world would be a different place if the Nazis had developed this sort of weapon first. There is no way to put that particular genie back in the bottle, but there is little evidence that the world isn't better off because of its existence.

    Now I agree that there should be "monitors," but I don't think that the government is likely to do a good job. Fortunately each and every one of us has the same power that Nader has. We can each speak up about the abuses that go on around us. There is nothing magical about what Nader did. And there is no guarantee that you could elect or appoint someone to do the same job. In fact, the second you have a designated whistle blower the whole process opens itself up to politics and corruption.

  • In the early stages of the automobile, there were hundreds of manufacturers in the US, and lots of unsafe cars. Now there are the Big Three and cars are much safer, but do you think that during the early stages of the industry anyone could possibly have predicted what the automobile would become?

    That depends on what you're claiming it has become. If it's that the car would be the primary mode of transport for a large percentage of the population in industrialised countries, then that's exactly what Henry Ford said he wanted it to become.

    IMHO the car is a perfect example of technology gone wrong. In cities it is the dominant mode of transport. People drive to and fro by themselves in cars capable of carrying 5 or more, at about 50 km/h in cars capable of doing 180, turning a finite resource into pollution at an alarming rate. The car is more bloated than Netscape 6; it is completely unsuitable for the majority of its tasks.

    I did see in a newspaper last week some concept cars developed by Peugeot and (another manufacturer) - small, fibreglass, two-seater and electric. Of course you'd be mad to drive one, in case some daydreamer in a 1.5 tonne, 5 litre four wheel drive on their way to the corner store ran over you.

    I wish more people *had* thought about what they were doing with the bloody car.

    [Disclaimer: I ride a push bike]
  • Indeed.

    For instance, television is one-to-many information distribution. Benefit: a lot of people didn't have such good access to information before TV came along. Downside: those people have no way of putting information back into the system, it's one-way only. Message: the role of the public is to consume only, on every level.

    Now, let's look at the Internet. Benefit: a lot of people now have two-way communication, even to the point that they can get 'slashdotted', their ideas given massive media exposure in various ways. Downside: this communication can be and is being spied on, recorded, censored, controlled and manipulated by powerful entities both in government and private industry. It is becoming very natural to be spied on, controlled, manipulated by outside forces with no accountability. This is taken as natural and desirable by most.

    Half a point to anyone who can spot _that_ message (as spelled out in classic literature). Hint: the concept of government we're accustomed to is being steadily replaced by this new concept, illustrating a significant shift in the world's power balance.

  • And progress is not intelligently planned
    It's the facade of our heritage
    The odor of our land
    They speak of progress
    In red, white and blue
    It's the structure of the future
    As demise comes seething through
    It's progress 'til there's nothing left to gain
    As the dearth of new ideas
    Makes us wallow in our shame
    So before you go to contribute more
    To the destruction of this world you adore
    Remember life on earth is but a flash of dawn
    And we're all part of it as the day rolls on
    And progress is a message that we send
    One step closer to the future
    One inch closer to the end
    I say progress is a synonym of time
    We are all aware of it but it's nothing we refine
    And progress is a debt we all must pay
    It's convenience we all cherish
    It's pollution we disdain
    And the cutting edge is dulling
    Too many folks to plow through
    Just keep your fuckin' distance
    And it can't include you!
    -Bad Religion, 1989
  • Dear Mr. Katz

    Your statements are wrong in principle, and terribly so. Honestly, you are trying to impose morality on things that are utterly indifferent to what humans may think of them.

    In part I am a technocrat, so you may blame me for this. But I also know the value and disvalue of many scientific and technological advances from a human point of view. Science and technology are inhuman in their inner nature. They do not depend on you, me, the government or corporations. Whatever humans do to find a new law or create a new invention does not give these things a human character. It is quite unfortunate that Earth is probably too isolated from the rest of the Galaxy. I believe that if we had two or three neighbors, we would have a clearer picture of how external to the human mind a wheel, a car or a computer really is.

    You talk about responsibility in technology. What makes you think that technology should be responsible? It is humans who should be responsible. They build nuclear stations that blow up, or nukes that bomb cities. The technology is mostly the same for both, and the human irresponsibility is at nearly the same level in both cases. A criminal case of playing with matches to see how things burn. But note that the problem is not with the matches; the problem is in how you use them. Please weigh every point of nuclear technology and tell me: should we forbid it? Yes? Fine, then we should also have forbidden dynamite, cannon powder, and even fire. Why didn't our ancestors see it? One man makes a fire to roast a chicken, and another to burn the people he has labelled his enemies. But the predatory nature is not in the fire; it sits right inside that piece of white matter in our skulls. So much for the danger of technology. Technology is technology. It is human values that are in question. Do not humanize technology; humanize yourself. Frankenstein was not wrong to revive a human. No, he was wrong to do it at any price, inverting all his human values, committing crimes and more hideous deeds to achieve his goal. If you read the novel carefully, you will note exactly this. His creation did not blame him for the revival. He blamed him for the moral fall, and for the fact that once the goal was achieved he refused to accept it, because accepting it would mean accepting all the monstrousness of what he had done to achieve it. And besides: once the monster was created, he tried to kill him. A double crime, after all.
    So Frankenstein is probably not the best example you could have chosen. It is more an indictment of human nature and morals than of technology. Dr. Jekyll and Mr. Hyde might be a better example; there the technology has more to answer for in freeing the monster.

    Meanwhile you talk about the future of technology and boards, public discussion, committees. Do you know what these mean? The Inquisition, the mob, and nothing less than the corporations (this last in a much broader sense than the "typical modern" corporation). Do you remember Galileo and Bruno? Do you remember what happened to the first car? Do you remember the masons of the Middle Ages and their construction skills?

    The Judaic/Christian/Muslim tradition states that God said: "Thou shalt not kill." Well, that's the point. You may have your bare hands, a stick, a pistol or a nuke; still, "thou shalt not kill". Granted, it is hard in our marvellous world to follow such a rule. I know how damn hard it is, especially when someone is trying to kill you. But the problem lies in the human values you hold and how you apply them. I don't walk through the streets shooting everyone I see. But I cannot refrain from violent measures to stop an uncontrollable hooligan trying to cut my throat.

    The same point applies to the human genome. If we truly care about our future, then we should forbid the uncontrolled use of genetics on humans. But if we care about solving serious health problems like hereditary defects, then we should modify genes. There is no other way.

    And anyway: imagine that humans carry some percentage of artificiality. In fact, they already do, if only through the selective breeding in our history. So what makes modifying genes such a big difference?
  • Oh right. Ayn Rand as guardian of our ethics.

    Certainly beats you.

    "Perhaps" we should let the executives at Firestone ("carnage on the freeways") Inc.

    What you're conveniently forgetting is that it wasn't a government agency or some Naderesque "consumer advocate group" (read: "bloodsucking lawyers") that found out about the problem. It was State Farm Insurance, presumably run by the same sort of "greedheads" that you despise. In fact, they informed the NHTSA two years ago.

    or Union Carbide ("the dead of Bhopal") Inc. make these kinds of moral and ethical decisions for us.

    And what you're also (again quite conveniently) omitting here, is that what happened in Bhopal in 1984 was not the result of any decisions made by "greedy" Union Carbide executives, but was caused by one of the workers sabotaging equipment at the plant. What's the glowing term you leftists use for that? "Direct industrial action", I believe?

    WHO DECIDES? Certainly we don't want the greedheads making those decisions, unless you are someone like Dragoness here, who serves as sycophant to the rich.

    I'll trust a businessman greedy for money rather than a politician greedy for the power to run other peoples' lives, any day. No contest.

    How about we have a little test. You total up all the people rounded up and shot/gassed/etc. by businesses during the past century, and I'll add up the number of similar folks treated thusly by governments, and we'll see who has a higher total. Better hurry: Hitler, Stalin, and Mao put the governments' "score" at around 60 million right off the bat.

    Or instead, how about this: call up Bill Gates and tell him you use Linux. Then call up Janet Reno and tell her you use heroin. Let us know which call results in armed men kicking down your door.


  • Er, no. Think of electricity and the internal combustion engine. Electricity is used extensively for torture; motors drive tanks. And first-world countries would starve, freeze and die without both of them. Just a couple of examples.

    If you work in the defence industry, or with military funding, then maybe you should look hard at the uses of your stuff. I hope the guy who invented napalm has it on his conscience forever - if there's no obvious use for your work except to kill painfully, that's not something I could do. But if you had to second-guess everything, there'd be nothing left.

    And think on: mil-tech isn't necessarily evil. GPS and the Internet are both military projects; GPS is still funded exclusively by the US military. The jet technology developed for fighters in the 40s and 50s drives the jumbo jet that takes you on holiday.

    And there's other stuff too - how's about medicine? The range of "truth drugs" out there have medical uses, and their "truth drug" properties are an accidental side-effect. The original, scopolamine, is a sedative and is also used in small quantities to combat motion sickness.

    What's likely to trip us up is something we'd never think of. In films, think T2 - a computer becoming sentient and enslaving mankind is not something a geek thinks about when he's working on his latest chip design ("It's not every day you find out that you're responsible for three billion deaths."). Robert Heinlein once wrote, "In the early 1900s, most futurists agreed the car would serve a purpose. Some saw that it would replace the horse. But none of them foresaw the change in mating habits of the American teenager which it caused." Would you have predicted 20 years back that the Internet would reach such a mainstream audience, given the average population of BBSes and their speed and reliability (or lack thereof)?

    Anyone who claims to be able to spot all future uses of something is lying - it just isn't possible. Futurists are no more accurate than weather forecasters - once something's happening, they may (if they're lucky) be able to tell you with a reasonable chance of success which direction it's going to go in and how fast, but there's no way to predict anything new starting, and there's no way of knowing whether the butterfly you've just seen flap its wings is going to cause the next cyclone.

    Grab.

  • The UK recognizes that Sealand does not fall under their jurisdiction, just as any boat in international waters (and not flying the UK flag) does not.

    This is not the same as "the UK recognizes Sealand as a sovereign nation."
