“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them. . . . what the O.C. Bible should’ve said is: ‘Thou shalt not make a machine to counterfeit a human mind.’” Frank Herbert, The Illustrated Dune (New York: Berkley Windhover, 1978), p. 12.
By The Honorable John McClellan Marshall
Senior Judge, Fourteenth Judicial District of Texas
Honorary Professor of the University, UMCS, Lublin, Poland
Member, International Academy of Astronautics
In 1851, Herman Melville published his most famous novel, Moby Dick. It is the story of the obsessive pursuit of a white whale, Moby Dick, by a ship’s captain, a pursuit that ultimately results in the destruction of the ship, the captain, and all aboard save one, who lives to tell the tale. In many respects, the book is a description of what happens when human beings lose sight of their humanity or, more simply, surrender it to something else. At the heart of the human condition is the ability of one individual to communicate with, and relate to, another, hopefully for the common good. It is when that relationship breaks down that humanity as a whole gets into various forms of trouble.
At their most basic, relationships between human beings for millennia were, and to some extent still are, characterized by a man-man interface. This is the result of one person interacting directly with another for the accomplishment of a particular task. This may expand to become a “team” effort to achieve a particular goal, or it may be a matter of dispute resolution. For example, the ignition of a fire to warm oneself and one’s family is an individual effort, while the hunt for a woolly mammoth became a “team” effort. In other words, the line between individual and team effort was heavily dependent upon the task at hand. The point is that, then and now, it is dealt with in strictly human terms in the context of the society in which the event takes place. As time went on, it became clear that more and more tasks would require a “team” effort, and the need for the individual “genius” tended to blend into the background. Of course, there were some “geniuses”, such as Archimedes, whose pioneering work laid the foundations of much of modern physical science. Also, Iktinos and Callicrates, co-architects of the Parthenon, established many of the principles of architecture that continue to the modern day. It is in this context that most of what would be considered “societal norms” arose, starting in the West with Socrates and being elaborated over the next 2000 years. In the Fourth Century B.C., Plato noted that Socrates was fond of beginning many of his dialogues with the phrase “Know thyself”, the clearest statement of the importance of the individual as the foundation of society. That statement in effect signaled that what was to happen next would be in its very essence both personal and human. It set the tone for much of Western civilization for the next two thousand years.
From the later centuries B.C. to the 18th Century, much of social, cultural, and commercial interaction could be described as stemming from a man-man interface enhanced by “occasional” technology created to meet a specific need. The shift began as humans or groups of humans perceived social or economic needs that were beyond mere survival. Key to understanding this shift is the realization that society moved steadily from an emphasis on the individual toward the group as a mechanism for the accomplishment of societal goals. This manifested itself in the development of tools that assisted in the accomplishment of these objectives.
The simple spear used to kill the mammoth for food led to the ballista as a means to attack or defend a group of fellow human beings. The invention of gunpowder became a major step forward in the technological evolution of civilization. A cannon, after all, did not particularly care why it was being fired, but the damage that it could do was exponentially greater than that of a ballista. The point is that, during the period of technological elaboration up to the 18th Century, the ability of individual humans to control the consequences of the technology diminished almost imperceptibly, but steadily. Few noticed, in part because much of what was developed was adapted to the changing landscape of human society. In commercial exchange and in communication, the development of wagons and later chariots enhanced the ability of far-flung groups to engage in potentially positive communication and commerce. For example, it is often observed that the gauge, the distance between the rails of modern rail systems, particularly in Western Europe, is within a few millimeters of the width of common Roman chariot wheels.
Certainly, it is true that in the West during the centuries preceding the Industrial Age, there were still individual “geniuses” the breadth of whose work impacted society at large, both in their time and ours, such as Leonardo da Vinci, Copernicus, Galileo, and Newton. Although these were individuals, their work often was aimed at a wider audience than a few select readers. It is, however, in the collective impact of their work that their true importance for society lies.
It was with the beginning of the Industrial Age that the social dynamic began to shift noticeably, and to some extent deteriorate, from a man-man toward a man-machine interface. This initially was the result of the creation of machines intended to perform mundane tasks formerly done by human beings. This arose either for reasons of convenience, such as making more of a product for an expanding marketplace, or in some cases for safety in the performance of a given task, such as mining. The key feature, however, was that even in the man-machine interface environment, man, in the affirmative exercise of his mental resources, retained some vestiges of human control of the mechanical environment, though that control tended to diminish in direct relation to the complexity of the technological response required by the problem at hand. The process, while gradual, was also characterized by a certain tension between the traditional view of individuality and the “group.”
Technology in such a circumstance clearly was meant to be a tool to implement these values and was unquestionably under the control of human beings. Whether it was a sword or a printing press, the human element initially had to be present in order for the technology to function. For that reason, ethical behavior was never absent from this system.
As time has passed, however, there has been a shift, both in terms of society and technology, with the result that, both chronologically and philosophically, the issue for society now is one of cyberethics: the relationship between the ethical and legal systems developed to serve humanity from ancient times to the present, as expressed in our philosophical and ethical systems of thought and in the judicial process, as contrasted with the ability of computer-driven technology to operate outside those conventions with almost no limits.
During the Industrial Revolution of the 19th Century, the socio-economic structure of society underwent a transition under the influence of what can be termed a man-machine interface in which technology became an even more elaborate extension of man in his activities. The impact of this shift reached a high point in the development of the assembly line for automobiles and other machines, and in the post-World War II Levittowns built to house the workers who serviced the assembly lines. Obviously, the factory system and the automobile were major economic extensions of human beings in terms of enabling them to perform tasks and to extend themselves well beyond what had hitherto been possible. An apocalyptic view of this relationship was presented in the visionary 1927 motion picture Metropolis, in its own way something of a horror story, yet a fable for modern times. This process, however, accelerated dramatically with the Cold War and the political and economic demands for technological superiority over potential threats to global survival.
The advent of the Space Age fostered an exponential shift in the fundamentals of social interaction, a shift to which the ethical system that had until then been at the base of Western culture was forced to react. In other words, people had related to each other primarily in a face-to-face mode, whether in shopping, voting, or even in wartime. Ethical and value systems reflected this relationship as society developed concepts of “right”, “wrong”, “good”, and “evil”, and then refined them based upon the common experience of mankind. This is sometimes referred to as “axiology”. The term was initially propounded by Lapie. Later, it was expanded upon by Eduard von Hartmann, such that it embraced both ethics and æsthetics. In the discussion of axiology, the ethical component investigates the concepts of “right” and “good”, while the æsthetic deals with the somewhat more subjective issues of “beauty” and “harmony”. Obviously, whether discussing axiology in terms of ethics or æsthetics, the philosophical roots extend back to Socrates and Plato. To this extent, an axiological approach to the discussion of the interface of humankind with technology necessarily brings the philosophical world directly into what can be termed the “real world” of human day-to-day existence. The importance of the relationship of the individual to the group is thereby brought into clearer focus.
It is in this context that the Space Age wrought its magic on the society that existed in the middle of the 20th Century and, without much fanfare, created the machine-machine interface. In this environment the requirement for human action was steadily reduced as the ability of machines to communicate with each other and accomplish tasks increased. With the diminution of the role of individual humanity, the ethical and value systems that had hitherto defined human society likewise appeared to be diminished. Humanity had once driven the functioning of the system, but human control over the environment has been steadily replaced by machine logic. As a result, the system that we have known as “Western civilization” may have been deprived of one of the bases of its validity and may well need redefinition.
During the six decades since NASA announced the Apollo Program, society as a whole has tended to focus on the expansion and development of new and exciting technologies. Initially in America, of course, these were primarily oriented toward the goal of landing on the Moon by the end of the decade of the 1960’s. The mechanisms in place to accomplish that daunting task were, by modern standards, quite primitive, ranging from the mechanical sequencers used in the earliest pre-Saturn launches to the “ropes” used in the landing computer aboard the Lunar Module itself. As demonstrated by the Apollo 11 mission during which the final approach was handled by Neil Armstrong personally, the man-machine interface with the human as a “fail safe” factor was still dominant even at that point in the space program, and man still was in control of the mission. Yet, even with its relatively primitive technological beginnings, there can be little doubt that the space program had an immediate and direct economic, philosophical, and psychological impact on society not just in America, but throughout the world. At its heart was the notion that “man is now a spacefaring creature.”
Nowhere was this more apparent than in the physical structures of the Apollo/Saturn V itself while it sat in the Vehicle Assembly Building at the Kennedy Space Center. It was not unusual during the assembly phase of these space vehicles for some of the workers actually to have their daily work areas (including telephones and desks) located within the vehicle in the areas between the stages. Similarly, many of the workers wrote their names on the inside walls of the stages prior to the final transportation of the space vehicle to the launch pad. It was referred to as “the bird” or “she,” like a large sailing vessel, in clearly anthropomorphic terms. Indeed, this terminology was itself an indication of the blurring of the man-machine interface by the vision of the future. The sheer physical magnitude of those vehicles was such that one was easily in awe of them as a matter of course, and they never were “just another machine” even to their designers.
This apparent transference or projection to the inanimate of characteristics that properly belong to the animate was neither deliberate nor conscious on the part of those engaged in the elaboration of the technology of the space program. At the same time, the perspective on technological advancement was undergoing a discernible shift away from human control to a presumption that the machine can “do it better.” It may well have been the product of an intellectual focus derived from a combination of the psychological pressure to achieve the Apollo goals and the challenge of resolving the many technical problems associated with that achievement. To that extent, the growth in the financial support of, and academic emphasis upon, science and engineering in the curricula of virtually all major universities throughout the world was a faithful reflection of that ethic. To some extent, this quest for technological excellence at the cost of human values became a sort of cultural “white whale” in which the latest innovation was its own justification for its existence.
One of the more mundane spinoffs of the space program, the use of Teflon® as a lubricating surface for the cryogenic valves in the Saturn V, spawned an entire household appliance industry. Everyone who has a pacemaker implanted in his or her chest owes the regularity of the heartbeat to Apollo. Of course, at a more subtle level, the direct needs for faster and more dependable computer systems fed the development first of solid-state technology and later of printed circuits as the principal means to assert control over the operation of an increasingly complex spacecraft. The list could go on and on, but what is significant is that during this time the way that humans looked at technology changed, slowly to be sure, but inexorably nonetheless.
In the liberal arts that normally required proficiency in foreign languages for advanced degrees, such as history or sociology, it has become possible at some universities to substitute a course in “computer science” or some specialized mathematics such as statistics for one of the foreign languages formerly required. This was initially justified in part on the premise that some of the liberal arts now included statistical data as well as literary components in order to make them comprehensive in the treatment of their topics. Unintentionally, this may have resulted in a narrowing of the focus of graduate level studies in some of the liberal arts to those areas that could be so quantified and whose literature was most often in the native language of the student. The breadth of viewpoint expected of a doctoral candidate in such a field would thus tend to narrow toward the exclusion, rather than the inclusion, of information on the topic.
That the internet provides access to an enormous body of information, one that may well now be beyond the ability of the scholar to examine in a meaningful way, is beside the point, except to note that the internet became another resource to be consulted. Academic research in recent years has been influenced dramatically by the ability of the personal computer to allow the student to search the internet for information. Unfortunately, as has been noted elsewhere, the problem of “Big Data” clouds the ability of the individual researcher to remain focused. One estimate is that approximately 16.3 zettabytes of information, roughly the equivalent of 16.3 trillion gigabytes, is being produced each year. By 2025, this number is expected to increase tenfold. The practical corollary to this problem is the question of how to store this material for future use in the face of digital storage formats that change every few years. At the annual meeting of the American Association for the Advancement of Science in 2015, the vice president of Google, Vinton Cerf, spoke in favor of the creation of “digital vellum.” By this, he advocated a system that would be capable of preserving the meaning of the digital records that we create and making them retrievable over periods of hundreds or thousands of years.
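The unit conversion behind that estimate can be checked directly. The short sketch below assumes decimal (SI) prefixes, under which one zettabyte equals exactly one trillion gigabytes; it is an arithmetic check only, not a claim about how the original estimate was produced:

```python
# Unit check for the "Big Data" estimate, using decimal (SI) prefixes.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes

annual_output_zb = 16.3
annual_output_gb = annual_output_zb * ZETTABYTE / GIGABYTE

# 1 ZB = 10**12 GB, so 16.3 ZB is indeed about 16.3 trillion gigabytes.
print(round(annual_output_gb / 10**12, 1))  # → 16.3 (trillions of GB)

# The projected tenfold increase by 2025 would then be on the order of:
projected_zb = annual_output_zb * 10  # roughly 163 ZB per year
```

The arithmetic confirms that the two figures quoted in the estimate are simply the same quantity expressed in different units.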
The line between human research guided by critical thinking and that conducted by the machine, which only disseminates what it has available to it, has become blurred to the vanishing point as universities grapple with the issue of “whose work is it?” Pocket-sized computers contained in cell phones, more powerful than the ones that flew Apollo to the Moon and back, have all but eliminated the need for students to learn basic arithmetical skills. What was obtained from the internet has tended to become “valid” data for inclusion in scholarly work without the benefit of critical thinking or analysis beyond “it must be true or it would not be on the internet.” As a consequence, one may legitimately question how much of the so-called “scholarship” of the past twenty years is the product of such “predigested” research, however refined the search might have been, and thus whether such “scholarship” is truly the product of the purported author. A similar issue arises when multiple researchers, acting in concert at a university or in consultation with each other over great distances, produce a finished book or other presentation of their work. The question then is clearly, “Whose work is it?”
The corollary to this question is whether in the focused pursuit of the “white whale” of technological perfection, the human aspect of that pursuit has become submerged. Put another way, is the price that society must pay for the “white whale” the sacrifice of encouragement of individual “genius” and achievement in favor of a sort of Orwellian “groupthink” that eliminates individuality and the rewards that go with superior performance in any given field?
This problem has its reflection in the legal issues that surround the international copyright and trademark conventions that attempt to restrict the downloading of materials from the internet without proper attribution, often leading to plagiarism. Lawsuits relating to music distribution rights on the internet or even the opening notes of a song provide the easiest example, but there are many others. It is not that the legal system is attempting to stifle legitimate research through the internet. Rather, it is that the traditional legal system is grappling with the nontraditional ability of technology to cross national boundaries leaving no trace and, hence, depriving the creator of the material of the fruits of his or her creativity. Admittedly, this may seem to be a rather quaint holdover from free enterprise capitalism, but it does seem to be damaging to a part of the engine that drives the changes that are being experienced in technology.
On a social level, the expansion of the scope of the man-machine relationship and its consequent dilution of the exercise of human mental faculties may have led to a more subtle and damaging series of phenomena. With the emphasis on speed, and the pressure to “produce” and to succeed even at the elementary school level, technology presents an unprecedented opportunity for humanity to get lost in the complexities of the machines that facilitate that speed and success. Most persons under the age of 40 in the West and in the technologically advanced areas of the East have never known a world without personal computers. The advent of such technologies makes it easy for human beings to compartmentalize their existence and, thus, dissociate from their fellows. This dissociation phenomenon ironically makes the actions of the group seem more significant than those of the individual. It can create a subtle isolation of individual human initiative, which comes to be regarded as “elitism” on a social level and therefore as something to be discouraged. The shift of focus away from people to machines thus becomes the modern analogy of the view that Captain Ahab held of Moby Dick at the start of the book. The pursuit of the “white whale” of technological excellence mimics the voyage of the Pequod, a voyage on which humanity now finds itself. As Tacitus once said, “Because they didn’t know better, they called it ‘civilization,’ when it was part of their slavery [idque apud imperitos humanitas vocabatur, cum pars servitutis esset].”
As an example, in essence, an entire field of study, that of mathematics as a numerical component of philosophy, has been defaulted to machines that we now are accustomed to thinking should be intrinsically trustworthy. For human beings to have been thus subordinated to machines is a suspect, if not outright dangerous, concept. Further, many students well may not know the basics of research in a conventional library; indeed, one could arguably write an entire thesis on the history of the Reformation without ever seeing an original manuscript by Luther. Yet, their ability to communicate with a machine is unparalleled as compared with the experience of their parents.
In an uncomfortable glimpse of family reality, in many cases children in the modern day prefer the company of the machine to that of their siblings or parents. After all, it does not tell them to pick up their clothes or what time to be home in the evening. The example of a preparatory school student who, during his Christmas holiday, spent fifteen days playing the latest computer game and eating meals at his “playstation” is representative of the problem. Such detachment from the rest of his family and friends, while extreme in its appearance to many observers, likely is not that unusual in many younger people and students, as indicated in the steady increases in sales figures related to computer games during the past five years. The parental allowance of this dissociative behavior should not be ignored.
Ray Bradbury, in his 1951 short story The Pedestrian, presented a portrait of an urbanized society in which a pedestrian walks down the street in the evening. The sidewalks are in disrepair from non-use, and he meets no one else during his tour of the area. In the homes, he sees only the glow of the television screen. The pedestrian, who is merely enjoying an evening stroll, is clearly a misfit in that society. Such a vision was prophetic indeed, but it may not be so much television as the computer monitor, the iPad, or the cell phone that is glowing in modern times.
It may well be that this behavior pattern is symptomatic of a breakdown in the “connectedness” of society generally that reflects the ability of a growing number of human beings to relate to a machine more easily than to their fellow humans. Some technologies already in existence store information provided by a group of individuals. By giving access to all of that information to each member of the group, the impression of “connectedness” is created. Unfortunately, there is no effective method for securing the integrity of the information, nor permitting critical analysis of what it tells about the person who provided the information.
The irony of this situation is that the technology of the internet itself purports to promote “connectedness” to everyone at the other end of the machine [an entire world], but not necessarily to the person standing next to the machine. Certainly, the advent of social media has facilitated an illusion of “connectedness”. The question remains as to the gap between reality and that illusion, and the impact of that gap on the ethics and values of the society that does exist. An analog to this is the often heard statement that “if everyone is responsible for what happens, then no one is responsible,” whether for good or ill outcomes.
This modern breakdown in the man-man interface also has implications in terms of the transmission of culture from one generation to the next. In earlier times, it was customary for parents and grandparents in the context of the extended family to be the vehicles by which social and cultural norms of behavior were transmitted to children and grandchildren. If in the present day these norms are not passed down from the parent to the child due to the lack of parent-child interaction, then whence is the child of today obtaining the value system that will be transmitted to the next generation? The question immediately presents itself, “What will the society look like if people begin to bond more readily to machines than to people?” By extension, the revision and modification of “history” is made simpler if the reality of what our grandparents knew disappears because of the ability of technology to uproot that reality.
In another aspect of the modern world, it is often said that the legal system derives its legitimacy from a faithful reflection of the society that it is designed to serve. What should be kept in mind is that, in the final analysis, the judicial process is a uniquely human institution designed to serve human needs, and anything that interferes with that objective should be viewed with a healthy skepticism. This is, in part, the reason that technologically-based evidence until recently has had such a difficult time being established in court.
An increasingly frequent type of such evidence is generally referred to as “electronic evidence.” This is evidence that is either electronically generated, such as computer printouts, or is electronically stored in some fashion, such as emails. In the case of the electronically generated records, the juridical issue is the method by which they were created and whether they might have been tampered with. The problem arises in the discovery of the existence of these materials and where they might be stored. For example, in a case involving a person who was run over by a city bus, the issue was the maintenance record of the bus. The printout that was produced had many pages, but there were lines that were skipped, revealing tampering with the record. That case settled without a trial. As to an electronic document that is stored, a bank record is an example. In one case, a bank was suing a depositor to recover $200 that had been taken through an ATM transaction. The evidence offered by the bank was a computer printout created by the ATM that showed that it had given the person $200. The defense was that the machine was lying and no money had in fact been dispensed. The inability of the defense to cross-examine the machine led to the case being dismissed. In such situations, the human factor retains some semblance of ascendancy over the “infallibility” of the machine.
An emerging problem with electronic evidence is that the machine itself may actually “hallucinate” in the course of preparing its data. If, for example, the machine is measuring temperatures or pressures to explain events that created the legal problem, there could be external factors at work. If the machine is exposed to a power surge or excessive heat or cold, it can affect both the measuring instruments and the operation of the machine on those measurements, which may be in error. In short, one of the problems with excessive reliance on the infallibility of machines is that humans, on their best day, may not always know what the machine is thinking.
One of the most frequently cited benefits of modern technology has been the introduction of what might be termed “technoevidence” into the judicial process. Technoevidence can be defined simply as “that information that would not be available to the trier of fact (whether judge or jury), no matter how smart the investigator, in the absence of modern technology”. Put another way, if the evidence could not have been deduced by the sheer brain power of a human being, it may be deemed admissible in court because of its seemingly intrinsic trustworthiness. To that extent, the potential “enslavement” of the judicial process to technological capability is something to which lawyers and judges need to be sensitive.
Recently, the advent of the “self-driving automobile” has raised serious questions, both legally and philosophically, as to whether this is an idea that has the potential to benefit mankind. The number of accidents that have resulted in injury or death of the occupants of these vehicles has brought into focus the question of the limitations of the machine. A growing dependence upon the machine to perform in an emergency “better than” or “faster than” a human is a questionable concept at best. Even when the “self-driving” software is combined with vehicle sensor braking systems, there is a multiplied potential for a breakdown. To that extent, the human operator is hostage to the designer of the “self-driving” software-hardware interface. From the larger perspective, that is, the societal consequences of such malfunctioning vehicles, the legal question of “whose fault is it?” presents an entirely new set of issues that would have to be addressed with the aid of technoevidence. Who is responsible, keeping in mind that the underlying Western philosophy of the law focuses on individual, not collective, responsibility?
In the United States, the proffered evidence must be examined by the judge, both as to methodology and as to the qualifications of the tester before it can be considered.  This “gatekeeper” function vested in the judiciary places the judge in a rather interesting position in relation to the scientist. After all, in most instances, the judge is the product of an undergraduate liberal arts educational background that is reinforced in law school by virtue of the extensive reading that is customarily required. When confronted with evidence grounded in the sciences, the judge must now evaluate on a scientific level the quality of the information. It is ironic that in this instance the humanities remain somewhat in control of the sciences, because in many cases the judge simply applies common sense to the decision process, allowing the evidence if “it will assist the trier of fact.” Generally speaking, however, the judicial process has reacted to the shift in cultural norms just as have other institutions in human society.
In criminal cases, particularly, the use of DNA (Deoxyribonucleic Acid) to determine the identity of a criminal perpetrator or a victim of a crime has been expanding and has had quite remarkable results. Prisoners unjustly accused have been freed, and the guilty have been apprehended, sometimes after many years of fruitless investigation and pursuit. The positive aspects of these results cannot be overemphasized.
Similarly, the utility of this type of evidence in cases involving children should not be overlooked, either. The question of paternity is most often resolved now by DNA testing, with the consequence that a person may or may not be found liable for child support payments based upon the outcome of the test. Characteristically, the test results will reflect a 99.9998% probability of paternity or a 50.0000% probability of non-paternity. From the point of view of the pragmatic social impact of this technology, the fact that children are being financially supported by parents, even absentee parents, is a positive benefit to the community that no longer has to support them.
More recently, DNA sampling has played a significant role in the identification of victims of the South Asian Tsunami in December 2004. By contrast, the limits of DNA sampling are reflected in the termination of the forensic examination of remains from the WTC disaster with the statement that over half of the victims “will never be identified.” That even this level of success would not have been possible without the technologies of the past five decades is beyond dispute.
Although much has been written about electronic evidence and how it should be presented, little attention has been paid to its impact on a jury or trial court as a factual issue. In recent years, the focus of attention has been on the recovery of information from computer hard drives that supposedly were cleaned of any incriminating materials. The technology to recover these “overwritten” files is sophisticated and, generally speaking, easy to operate. Firewalls and encryption software are of little value when the hard drive is being examined in this context.
The problem from the point of view of the justice system is not that these technologies are not reliable. On the contrary, it is their very precision that is the issue from the point of view of the court. In fact, almost never is the test procedure or the result the subject of a challenge in court, as typically would be the case for other scientific evidence. The fact that the test and results can be manipulated or are sometimes the subject of human error in the testing process does not detract from their fundamental utility. It is rather that their human operators, and those who receive information from them, have invested them with a possibly undeserved infallibility that tends to make the introduction of “technoevidence” conclusive on the subject at hand, effectively foreclosing any further discussion.
The potential for manipulation of the testing process and the results that are achieved is both real and ongoing. This is particularly true when statistical models, backed by technology, are used to support a particular hypothesis. For example, in a case that involved a massive injury to a truck driver, his lawyers presented as a witness an economics professor whose specialty was statistical analysis. The evidence that was offered was that, if his pain and suffering were valued at $.05 per minute for the rest of his life, the total dollar value of his pain would be astronomical. The problem, obviously, was that there cannot logically be a one-to-one correlation between pain and money. This was an example of the “Sam Rule”: Statistics don’t mean a damn to the man who is struck by lightning. Such testimony clearly had been driven to a particular outcome, not “pure” science, and, predictably, the jury ignored the professor.
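The arithmetic behind the professor's presentation is easy to reconstruct. A sketch follows; the 40-year remaining life expectancy is an assumption for illustration, as the figure actually used at trial is not in the record here:

```python
# Reconstruction of the "nickel a minute" pain valuation.
# The remaining life expectancy is assumed, not taken from the case.
rate_per_minute = 0.05
minutes_per_year = 60 * 24 * 365   # 525,600 minutes in a year
years_remaining = 40               # assumed for illustration

total = rate_per_minute * minutes_per_year * years_remaining
print(f"${total:,.0f}")            # just over a million dollars
```

The exercise shows how a modest-sounding unit rate, compounded over a lifetime, was driven to a predetermined "astronomical" conclusion.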
When confronted with a complex concept such as intoxication, for example, sometimes a simple, graphic experiment in the courtroom can transform that concept into a reality just as if there were serious technology involved. For example, in a case in which the amount of alcohol consumed by the injured plaintiff was an issue, the defense lawyer came up with a practical demonstration. Rather than employ the breathalyzer data or blood test, he pointed out to the jury that the plaintiff had consumed six cans of beer in about an hour and a half prior to the accident. With the permission of the court, of course, he brought in six cans of beer, opened them, and poured them into a large, clear container that then sat in front of the jury for the next two days of trial. The smell of the beer completely overshadowed the remainder of the plaintiff’s case, with the result that the jury did not award damages.
Such demonstrative presentations, while not heavily technological in their content, undoubtedly can assist the jury, or the judge, in understanding complex concepts. Obviously, technology, in the form of VR or even holographic reconstructions of accidents or other events, could be outcome determinative in the modern trial environment. In such a context, it would be essential to remember that VR, “virtual reality”, is exactly that: it is artificial and is, therefore, at best an approximation of reality. As such, it would be limited by the structure of the programming of the VR device, a clearly human endeavor. From the perspective of cyberethics, it is clear that VR may well be the electronic analog to the difference between the shadows on the wall of Plato’s cave and the reality of what makes the shadow.
Similarly, although holographic representations of rock stars such as Whitney Houston, Elvis Presley, or Michael Jackson may be great entertainment, there is a cyberethical component that must be considered in the context of a legal demonstration. Certainly, the holographic presentation of a complex surgical procedure, for example, might be very useful in assisting a jury in understanding what took place and in reaching a verdict. At the same time, the preparation of such evidence must be as rigorous as the human record of the surgery and technology allow. This is especially true in a novel fact situation or one where expert testimony is normally required. Obviously, the utility of the finished product must be of demonstrable assistance to the court, or it simply would not be allowed.
This type of artificial intelligence (AI)-generated synthetic video, text, or audio demonstrative evidence, however, is also subject to deliberate manipulation. The more sophisticated ones are known generally as “deepfakes”, and the amateurish ones are “cheapfakes”. The key concept, though, is “-fake”. The problem for the courts in such a presentation is how to determine the fundamental integrity of the presentation that is being offered. With the technical advances in the production of deepfakes, this will become increasingly difficult, though it could produce an entirely new industry for experts who can uncover the fakes and validate the genuine. Juridically, this would be akin to determining the genuineness of an allegedly forged signature, but judges must be aware of the issue, lest technology erode confidence in the integrity of the judicial process.
By way of contrast, in the modern day, the social and political reaction to the COVID-19 virus provides an example of the inability of science, even with the aid of technology, to define accurately the boundaries of an issue. Of course, this pandemic environment has many novel and unprecedented aspects to it, but it is the attempt at quantification of them that has presented the problem most clearly. At the outset, statisticians applied traditional models, in some cases with flawed input as to numbers, to what turned out to be a very un-traditional phenomenon. The analytical results were, as a result, wildly inaccurate when the true numbers appeared.
Unfortunately, there was a lag time between the creation of the initial modeling and the emergence of the reality. This lag, combined with the apparent inflexibility of the model technology to adapt to the changing numbers and conditions, influenced lay politicians, led by what they believed to be good science, to make decisions that now appear to have had unintended and very negative societal and economic consequences. This is not to suggest that these decisions were a product of “junk science” or intentional misrepresentation of the “facts” by scientists. Rather, it is to suggest that scientific inertia in the analytical process did not allow the technology that was available to be of material assistance in correcting the analyses in real time. One very unfortunate unintended consequence of this situation has been a noticeable decline in public confidence in the “science” that drove the political decisions.
The point, quite simply, is that the importation of technoevidence should be done with a critical eye to the methodology and source materials in order to establish a proper foundation for the validation of it. This would apply equally to the very human judicial process and to politics. If the situation should be unprecedented, then traditional models based on technology that cannot change with the times will, by definition, be of little, if any, utility. This opens up the opportunity for technology to evolve in its ability to assist in the creation of new models.
For example, at the trial court level, such validation has come from studies that have revealed the relationship between the technology and the media, sometimes described as “The CSI Effect”. This manifests itself in cases where, in the absence of DNA evidence to link a defendant to a criminal act, the defendant goes free. This is because the jurors have become accustomed through television programming to see a case as doubtful without the DNA evidence. Of course, the proper predicate must be in place to allow the DNA evidence to be presented, but even then, the jury may or may not consider it.
When applied in the social context of the judicial process, such an analytical approach takes the court one step beyond machine-machine and borders on giving technoevidence a machine-man character in its impact on the finder of fact. The legal system then proceeds in many instances in a rather mechanical fashion. For all practical purposes the judicial inquiry [and, therefore, judicial discretion] ends there. This is a clear representation of the next stage in the evolution of the cyberethical relationship, that of machine-man interface.
In the world at large, there is an almost imperceptible, yet undeniable, shift to a machine-man interface that reflects the willingness to defer to the action of a machine programmed to perform increasingly complex tasks formerly done by a human being. As human beings more and more depend upon [read: “defer to”] the ability of their creations to relieve them of responsibility for decisions, man has tended to become the extension of the machine that he created, rather than the reverse, as has been the case until now. Similarly, in this paradigm the machine, once set in motion, may mindlessly determine the destiny of the human beings before it who are its focus, as in the judicial process. Perceptually, this has created a blind spot in the ability of human beings to discern what limitations actually exist in the machines. Put another way, “[People] will believe anything if it is in the computer.” It is perhaps at this point where the quest for the “white whale” encounters the “black swan”.
Reliance on standard forecasting tools can both fail to predict and potentially increase vulnerability to black swan events, such as COVID-19, by propagating risk and offering false security. Some of these tools, such as PERT or CPM can, with adequate data input, be flexible enough to avoid the catastrophic events that tend to follow the black swan. For example, in the Apollo Program, the planning system led to what was called a “3 Sigma (3σ) design” plan. The engineers and technicians realized that a complete system failure that would lead to the death of the crew was possible. They then worked backward to design the systems so that the probability of that happening was practically zero. This system did not, however, merely look at the end of the mission, i.e. landing on the Moon, as the ultimate failure point. In fact, the engineers examined several discrete intermediate points at which, if a total failure occurred, it would result in “mission loss” (the euphemism for “everything went wrong, the vehicle was lost, and the crew perished”). An on-pad explosion of a fully-fueled Apollo/Saturn V space vehicle, with its resulting 3000ºF fireball of about a kilometer diameter, consuming the entire vehicle, ground support equipment, and spacecraft, was considered such an event. Short of this was “crew loss”, a situation in which the crew perished, but most of the mission objectives were achieved. It was this level of loss that nearly happened on Apollo 13, but the fact that it did not was in part the result of the multiple levels of 3σ design planning. The practical outcome was that the Apollo/Saturn V space vehicle, in many of its systems, had double redundancy.
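The logic of working backward from an unacceptable end state can be sketched numerically. Assuming, purely for illustration, that a single critical component fails with roughly the one-sided 3-sigma tail probability, double redundancy drives the joint failure probability toward the "practically zero" the Apollo engineers sought; the numbers below are illustrative, not actual Apollo reliability figures:

```python
# Illustrative only -- not actual Apollo reliability data.
p_single = 0.00135               # ~ one-sided tail probability beyond 3 sigma
p_redundant = p_single ** 2      # both independent units must fail together

# Several discrete "mission loss" checkpoints, each protected by redundancy
# (the count of eight checkpoints is an assumption for illustration):
checkpoints = 8
p_mission_loss = 1 - (1 - p_redundant) ** checkpoints

print(f"single unit:    {p_single:.2e}")
print(f"redundant pair: {p_redundant:.2e}")
print(f"whole mission:  {p_mission_loss:.2e}")
```

The sketch shows why the engineers designed from the catastrophe backward: redundancy squares an already small per-unit probability, and even summing over several critical checkpoints the mission-loss figure stays on the order of one in a hundred thousand.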
Unfortunately, it would seem that the black swan scenario in modern response planning does not acknowledge the catastrophic event as the starting point, so planning for it as a way of avoiding it simply does not happen. This fosters a very negative view of the world in general and potentially reduces humans to pawns in a game the rules of which are completely unknown at the time the catastrophe occurs.
An illustration of the “black swan” effect in the transition from the man-machine interface to the machine-machine interface and then to the machine-man interface in the context of the space program is nowhere more clearly presented than in the tragedy surrounding the Columbia disaster of February 2003. There can be little doubt that the fact that the on-board computer was in control of the shuttle to the exclusion of the human crew during reentry (machine-machine-no man) deprived the crew of any opportunity to affect the sequence of events that led to the catastrophic breakup of the spacecraft. Whether anything the astronauts could have done would have averted the disaster is, of course, speculation. That the presence of the machine-machine-no man interface preempted their ability to exert any control over the reentry attitude of the vehicle is undeniable. Perhaps the unquestioning confidence that our society has placed in machines is a component of the disaster as well.
The investigation that followed the Columbia disaster revealed a shift in the attitude of people toward the function of the machine vis-à-vis the system that it is designed to serve. Formerly, when a machine malfunctioned, the problem was characteristically handled by a man-man interface between the operator or other appropriate person and the person who needed to have the machine work properly. The person responsible for the operation of the machine would simply resolve the problem for the person who had been “victimized” by the machine by later initiating appropriate inputs into the machine to “make it right.” The focus was on the resolution of the human problem that had been created by the machine. In marketing terms, “the customer was right.”
There is little question that the accuracy of machines in the performance of their assigned tasks has improved dramatically during the past half century. Both in terms of design and hardware technology, their reliability has increased exponentially, and as a result, society has tended to view the machines as not subject to serious challenge. As a response to the increased trustworthiness placed in machines by society in general, the process of defining the relationship between the needs of humans and the ability of machines to satisfy those needs has changed as well. The focus is not necessarily on the resolution of the systemically created human issue; rather, it is on how to fix the machine to make it perform the way that the system demands. Put another way, if the system of which the machine is a part breaks down, the emphasis is on fixing the machine, not on the examination of the human impact of a possible system design failure. In the case of Columbia, the report did point out that “organizational cause factors” contributed to the disaster, but the principal focus was on the repair or redesign of the hardware systems and a return to manned space flight as soon as possible “consistent with the overriding objective of safety.” In other words, “do not look at possible future events, but simply fix the machine and work on the system as it goes forward.” This reflects the post-catastrophe characteristic of the “black swan” scenario in explaining the event as “a mistake.”
That viewpoint was reflected in the determination that flights of the shuttle would resume in May 2005 after rigorous testing of the spacecraft systems by computer simulation. There was little indication that there had been any additional or newly designed real-time testing of the hardware. What had been done reflected an attitude that the machine is capable of testing the machine in a virtual manner without regard to the physical realities of the hardware. This is a philosophy ripe for disaster, because it presumes that the computer that is conducting the test is capable of detecting all of the potential flaws that exist in the real hardware. This represents the “blind spot” in the post-Columbia thinking among the engineers. In fact, of course, the computer will only detect those flaws that have been programmed into the simulated hardware that it is examining. In this thought process, it is the pursuit of the “white whale” that is important, not any concern for the “black swan” of disaster.
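The blind spot described above can be made concrete: a simulation can only exercise the failure modes its builders thought to model, so an unmodeled physical flaw passes every virtual test. A minimal sketch follows; the failure-mode names are hypothetical illustrations, not drawn from any actual shuttle test suite:

```python
# A virtual test campaign covers only the failure modes that were
# programmed into the model -- the essence of the "blind spot".
# All names here are hypothetical illustrations.
modeled_failure_modes = {"engine_shutdown", "sensor_dropout", "seal_leak"}

def virtual_test(modes_to_check):
    # The simulator can only report on modes it knows about.
    return {m: "tested" for m in modes_to_check if m in modeled_failure_modes}

# The real vehicle also carries a flaw that nobody modeled:
real_flaws = {"seal_leak", "foam_shedding"}

results = virtual_test(real_flaws)
undetectable = real_flaws - set(results)
print(results)       # {'seal_leak': 'tested'}
print(undetectable)  # {'foam_shedding'} -- invisible to the simulation
```

However exhaustively the simulated campaign is run, the unmodeled flaw never appears in its output, which is why virtual testing alone cannot substitute for real-time testing of the hardware.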
This presents the next challenge that human society must confront in its assessment of the cyberethical problem: what will be the mind-set of a machine that supposedly is programmed to “respond like a human” and “care”? The problem clearly is one of design concept in that human society is attempting to create a machine that will be sensitive to, and thus responsive to, human needs while at the same time society itself may be in the process of “disconnecting” its members. Indeed, it may be the readiness of our society to allow the application of such mechanical logic derived from the machine, not to say “laziness”, in the drive to achieve a given objective that may be the key to understanding the subtle danger posed to our value system by modern technology. To that extent, as Anaïs Nin pointed out, “We do not see things as they are… we see things as we are.”
The tension between the Three Laws of Robotics and the process toward technological innovation reflected in the “I, Robot” stories of Isaac Asimov illustrates the dilemma quite clearly. By definition, such a machine would be seriously flawed in its inability to sense in any meaningful way such things as moods or personality variations of its human operators. That gap likely would be filled by the programming choices of the creator of the machine, a person whose “connectedness” to humanity is suspect at best. This “solution” was dramatized prophetically in the original Star Trek episode “The Ultimate Computer” in which a computer was programmed with the brain waves [“engrams”] of a scientist who turned out to be insane. The result was a disaster that cost many lives. In the modern context of the societal default to machines, this placement of such near-absolute trust in an entity that presents itself as “caring” could have disastrous consequences.
It is particularly in the modern medical context, whether of microsurgery or of life support systems, that the reality is that human beings are ever more routinely deferring to robots to allow impaired human beings to perform normal human tasks. This hints that the definition of the machine-man interface may be in the process of evolving beyond even a man-machine-man interface into what is sometimes referred to as “transhumanism.”  The word itself implies that human beings as unaugmented organisms may have entered a period of, at the least, obsolescence in the minds of some philosophers and engineers for some purposes. In its most basic manifestations, this is expressed by the use of scientific devices, whether chemical or mechanical, to extend to extraordinary extremes what would otherwise be normal human capabilities.
In this context a concern to be addressed should be the emerging technology of the so-called “microbots”. These are in reality microchips that can be grafted onto living brain cells with the objective of creating a “thinking machine.” To date, these microbots exist only in the laboratory in the brains of experimental rats, but it is projected that the modified creature might well be able to replicate itself and transmit its modifications through more or less normal reproductive processes with unforeseeable results. If such a concept should be transferred to human beings, then the machine-man interface would be blurred beyond definition, extending into a sort of neo-Kantian metaphysical context.  If this were to happen, then the ability of the mind of man to perceive that portion of himself that is machine might well cease to exist. Of course, it does require a serious stretch of the imagination for man to create a machine that so faithfully duplicates himself that mankind is no longer a viable form of existence. To be direct, if we are not careful, this could be the equivalent of harpooning ourselves and creating the “black swan” in the process. That situation would confirm the words of Albert Einstein that “It has become appallingly obvious that our technology has exceeded our humanity.”
Rather than abdicate trustingly to machines in this process, our society needs to consider Isaac Asimov’s First Law of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm. This formulation clearly implies that there is a man-machine-man interface between the robot and the human being in which the human being was viewed as the primary component of the system. In 2009, Professors Murphy and Woods formulated the “Laws of Responsible Robotics.” Their first law stated, “A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.” While this at least introduces an axiological analysis into the decision-making process, there remains the issue of what the machine might be capable of doing independently, that is, making a life-or-death decision, in the absence of close supervision by a human being. Woods said, “Our laws are a little more realistic, and therefore a little more boring” and that “The philosophy has been, ‘sure, people make mistakes, but robots will be better – a perfect version of ourselves.’ We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways.” Unfortunately, these “laws” reflect the willingness of society to default to the machine, but do not address the limitations on technology implicit in the cyberethical problem. This issue of control over the device and the extent of the control by the human being remains at the heart of the debate about evolution to the transhumanism phase of the man-machine-man interface.
One example of the positive impact of transhumanism is the experimental use of the emerging psychotropic drugs to relieve the symptoms of PTSD in military personnel. Similarly, in March 2012, it was announced that, in the United Kingdom, a researcher had installed in his own arm a “telepathy chip” that, when connected to the nerves in his arm, allowed his brain to communicate wirelessly with a robotic hand that moved as his brain dictated. The robotic hand that was demonstrated had sensors that allowed it to pick up a glass gently and to put it down without breaking it. The life-altering benefit of transhumanism to someone who had lost an extremity due to misfortune is obvious.
All of these advances undoubtedly have positive potential uses that would greatly assist humans with disabilities and extend their lives almost indefinitely, at the same time improving quality of life. It takes little imagination to conceive of a cyborg designed to encase the body of a Stephen Hawking that can then be operated indefinitely wirelessly by his brain and “telepathically” operate other machines as well. The issue, however, is not whether this is possible, but should it be done? Put another way, does the ability of the machine to extend the limits of human physical existence and capabilities indefinitely in fact “injure” a human being by impacting the totality of his or her humanity in such a way as to violate the First Law?
When applied to currently intractable issues such as astronaut survival on extended interplanetary travel missions, such principles create the appearance of a potential for accomplishing such tasks in the foreseeable future. With the aid of such a technology, however, it would be possible for a genetically engineered astrophysicist to embark from the 21st Century upon an extended voyage to verify or refute new theories concerning the universe. Indeed, with such “telepathic” chips, it may well be possible to design a spacecraft that would not only protect the pilot, but would be operated as an extension of the mind of the astronaut to the extent that the two during their voyage together would enjoy a certain symbiosis. Training for such a mission would be quite different from what has gone before, if only because the distinctions in the man-machine-man interface thus created would be severely blurred. Yet, while the machine might well be able to do the task assigned to it by the human brain, the question would inevitably arise as to what the machine might do if the brain were to cease to direct it for whatever reason. On a practical level, upon his/her return to Earth many years, if not centuries, later, would the astrophysicist be able to survive in the undoubtedly altered environment of Earth? As a society, it would be important to examine the various possible unintended consequences of such a technological leap.
The technological revolution initiated by the space programs of the major powers in the latter half of the 20th Century created not only an epochal mechanical impact on society, but also led to philosophical shifts in how mankind views itself in relation to the machine. It is not so much the question of whether society will continue to create new and more complex machines that can think with increasing levels of independence. It is a “given” that this will be the case. The issue is rather the inclusion in the programming and design of those machines basic concepts of right and wrong, morality and immorality that have stood the test of time. If, at some point, technology is allowed to assume a principal position in the educational process as it relates to human beings, there needs to be this sensitivity. Only in the “connectedness” between the machine and the man, in whatever interface relationship might exist, is there validity in that process. In this way, human society can create a safeguard against a technology that might on its own initiative, unrecognized by its creators, decide that human beings are so severely flawed in their mental and emotional capacities that the planet should be “cleansed” of them by the more precisely logical machine in a destructive application of the First Law. When he said, “I was dreaming”, Sonny, the lead character in the movie I, Robot, showed that there may well be an objective in the advancement of technology that does not leave humanity either in the position of Captain Ahab or the victim of a black swan.
 This concept was originally articulated by the author in the paper “The Terminator Missed a Chip!: Cyberethics”, presented at the International Astronautical Congress of 1995, Oslo and originally published by the American Institute of Aeronautics and Astronautics, Inc. with permission. Released to IAF/AIAA to publish in all forms. The corollary is the ability of technology to drive alterations in those conventions without regard to human input in a societal “default” to the machines.
 Lapie, Logique de la volonté, Paris: F. Alcan.
 von Hartmann, Grundriss der Axiologie, Hermann Haacke (1908).
 J. Engebretson, “Data, Data, Everywhere”, Baylor Arts and Sciences (Fall 2018), 24.
 Agricola (98), Book 1, paragraph 21.
 Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
 A practical statement of the relationship, or lack thereof, between reality and scientific analysis, named for a prominent physicist, with whom the author is personally acquainted.
 The defense attorney, Gerald Powell, Esq., left the private practice of law shortly afterward and became a distinguished professor of trial law at Baylor University School of Law.
 See M. Reynolds, “Courts and lawyers struggle with growing prevalence of deepfakes”, ABA Journal (Trial and Litigation), June 9, 2020.
 This is the Sleepless in Seattle Rule articulated in the context of airline scheduling criteria, but its applicability in many aspects of the modern world is undoubted.
 A “black swan” is an event that is beyond what is normally expected of a situation, hence “unpredictable”, and has potentially severe consequences. Black swan events are characterized by their extreme rarity, severe impact, and the practice in hindsight of explaining widespread failure to predict them as simple folly. The term was popularized by Nassim Nicholas Taleb, a finance professor, writer, and former Wall Street trader, but it since has been applied to a much wider range of planning and prediction constructs.
 Report of the Columbia Accident Investigation Board, August 2003, Volume 1, p. 73.
 Id., at 9.
 As defined in the Oxford Dictionary, “transhumanism” is the belief or theory that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology.
 Cf. Kant, Critique of Pure Reason, 1781.
 The Three Laws of Robotics (often shortened to “The Three Laws”) are a set of rules devised by Isaac Asimov. They were introduced in his 1942 short story “Runaround”, although they were foreshadowed in a few earlier stories. The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy (Raytheon Professor of Computer Science and Engineering at Texas A&M) and David D. Woods (director of the Cognitive Systems Engineering Laboratory at Ohio State) proposed “The Three Laws of Responsible Robotics” as a way to stimulate discussion about the role of responsibility and authority when designing not only a single robotic platform but the larger system in which the platform operates. The laws are as follows:
1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. [This law posits that the “system” is a “human-robot” work system, and does not contemplate a “robot-human” system.]
2. A robot must respond to humans as appropriate for their roles. [It is not clear what the antecedent for the word “their” is; hopefully, it is “humans”.]
3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. [A “sufficient situated autonomy” has too much flexibility when contrasted with the requirement of a “smooth transfer of control”.]
Submitted to the College of The State Bar of Texas, 2020.
© 2020 by John McClellan Marshall
All Rights Reserved