Would not God discover this? For he knows the secrets of the heart.

are Machines we create… being used to find “fraud” (who owns what?)… imprisoning us in the “death” of the “virtual” in a “world” defined only by understanding in collective human will?

could “fraud” be used as the reason to “mark” every human on the planet?

what is the unethical human experimentation of the “century”?

could all this be about “possession”… of humans… of our soul… of our life in us… to be treated like “property” inside “property”?

“Do not turn to mediums or necromancers; do not seek them out, and so make yourselves unclean by them: I am the Lord your God.”

Humans are startlingly bad at detecting fraud. Even when we’re on the lookout for signs of deception, studies show, our accuracy is hardly better than chance.



Each year, the Association of Certified Fraud Examiners conducts a study of known scammers. It looks at demographic information, distinguishing characteristics, and patterns of approach in order to gain insights on the types of people most likely to commit fraud in the future.

The most widely anticipated approach, however, involves watching what goes on inside the brain. At the University of Pennsylvania, an associate professor of psychiatry named Daniel Langleben studies the ways in which neural activity can signify lying. Langleben hypothesizes that suppressing the truth requires additional cognitive operations that can be detected by fMRI. He also looks for so-called concealed information, which indicates that people know something they shouldn’t: Does your brain scan show that you recognize a fraud victim, for instance, after you said you didn’t know him? In a forthcoming paper, Langleben and his team report that the fMRI-based method outperformed traditional polygraphy by at least 14 percent.


Dark Psychology

Dark Psychology is the study of the human condition as it relates to the psychological nature of people who prey upon other people, motivated by criminal and/or deviant drives that lack purpose and defy the general assumptions of instinctual drives and social sciences theory. All of humanity has the potential to victimize humans and other living creatures. While many restrain or sublimate this tendency, some act upon these impulses.

Dark Psychology seeks to understand those thoughts, feelings, perceptions, and subjective processing systems that lead to predatory behavior that is antithetical to contemporary understandings of human behavior. Dark Psychology assumes that criminal, deviant, and abusive behaviors are purposive and have some rational, goal-oriented motivation 99% of the time. It is in the remaining 1% that Dark Psychology parts from Adlerian theory and the Teleological Approach.

Dark Psychology postulates there is a region within the human psyche that enables some people to commit atrocious acts without purpose. In this theory, that region has been coined the Dark Singularity.

Dark Psychology posits that all humans have a reservoir of malevolent intent towards others, ranging from minimally obtrusive and fleeting thoughts to pure psychopathic deviant behaviors without any cohesive rationality. This is called the Dark Continuum. The mitigating factors that act as accelerators and/or attractants toward the Dark Singularity, and that determine where a person’s heinous actions fall on the Dark Continuum, are what Dark Psychology calls the Dark Factor.

Dark Psychology encompasses all that makes us who we are in relationship to our dark side. All cultures, faiths, and humanity have this proverbial cancer. From the moment we are born to the time of death, there is a side hidden within us that some have called evil and others have defined as criminal, deviant, or psychopathic. Dark Psychology introduces a third philosophical construct that views these behaviors differently from religious dogmas and contemporary social science theories.

Dark Psychology assumes there are people who commit these same acts and do so not for power, money, sex, retribution, or any other known purpose. They commit horrid acts without a modus operandi. Simplified, their ends do not justify their means. There are people who violate and injure others for the sake of doing so. Within all of us is this potential. The potential to harm others without cause, explanation, or purpose is the area explored. Dark Psychology assumes this dark potential is incredibly complex and even more difficult to define.


Cyberstealth is a concept formulated along with iPredator and is a term used to define a method and/or strategy by which iPredators use Information and Communications Technology (ICT), if they so choose, to establish and sustain complete anonymity while they troll and stalk a target. Cyberstealth is a methodology entrenched in Information Age Deception, also called cyber deception.

Given that the Internet inherently affords everyone anonymity, the Cyberstealth designed by iPredators ranges from negligible to highly complex and multi-faceted. The rationale for using “stealth” in the suffix of this term serves to remind ICT users of the primary intent fueling iPredators. This intent is to hide their identity by designing false online profiles, identities, and covert tactics and methods to ensure their identities remain concealed, reducing their probability of identification, apprehension, and punishment.

Unlike classic deception used by traditional criminals and deviants, online deception relies completely on the anonymity and “veil of invisibility” available to all ICT users. The primary difference between Information Age deception and Cyberstealth is the activities iPredators and ICT users engage in. In this writer’s construct, Cyberstealth is reserved for iPredators who actively plan a strategy that has criminal, deviant, and harmful implications for targeted victims. Information Age deception includes all forms of Cyberstealth, but also includes deceptive practices that do not have elements of crime, deviance, or harm against others.

Cyberstealth is a covert method by which iPredators are able to establish and sustain complete anonymity while they engage in ICT activities planning their next assault, investigating innovative surveillance technologies or researching the social profiles of their next target. When profiling or conducting an investigation of an iPredator, their level of Cyberstealth complexity, digital footprint, victim preferences, ICT skills, and behavioral patterns are used to identify who they are.


“Soon enough, pseudonymity and anonymity will only exist online; in the real world…they’ll be more or less extinct.” The hunt for the Boston bombers is to the coming world of surveillance as a 1980s PC is to a modern server farm. Facial recognition, gait recognition, drones the size of dragonflies — all here already. Just imagine twenty years from now. Every step you take outside will automatically be tracked, indexed, and correlated to all of your previous activity ever.


Encryption and anonymity tools are an important check on the widespread and inappropriate use of information controls to undermine human rights. Standardized integration and adoption of these tools in digital communications is one of the few methods available to civil society to protect itself in an environment in which digital surveillance and espionage is ongoing and largely tolerated—if not perpetrated and mandated—by governments.

who is “allowed” to know of or use “tools”…?






are you in a denial cycle with “technology”?

can you question it like a human… imagine it as a person to whom you are talking? what would you ask?


Psychopaths, most of the time, enjoy prosperous careers. They fully understand the mechanics of human emotion, although they are not able to feel emotions themselves. Talented at manipulating humans, they work hard to get their co-workers to like them.

A famous example of a psychopathic killer is Dennis Rader, known to many as the BTK (Bind, Torture, Kill) killer. He worked for the US Air Force, as well as for a home security company. Truly charismatic, he went on to become the president of the Christ Lutheran Church in Wichita.

Social Relationships

Psychopaths and sociopaths approach social relationships differently as well. Sociopaths find it difficult to maintain normal relationships, given their disorganized natures. And when they do foster bonds, they maintain only relationships that can benefit them. More often than not, sociopaths tend to be ‘social predators,’ forming parasitic relationships with their companions.

As for psychopaths, they have the ability to maintain normal relationships, since they appear charming to most of their families and friends. The harmony of these relationships, however, is oftentimes superficial.

are psychopaths using Machines to make and control sociopaths who use “tech” to build pathocracies?

Psychopaths are individuals who demonstrate risky behavior, as well as the inability to follow social norms. They exhibit extreme temperaments, ranging from fearlessness to impulsivity. Apart from suffering from anti-social personality disorder, psychopaths are known to be delusional. Conscience and empathy are some of the common traits they lack.

Sociopaths, on the other hand, feature relatively normal temperaments. They are easily agitated, and oftentimes nervous. While such individuals can be attached to an individual or a collective, they disregard the concept of society as a whole.

The symptoms of sociopaths arise from sociological factors that affected them negatively when they were young. These factors include poverty, aberrant peers, and parental neglect, to name a few. To wit, nurture, or environment, contributes to a person’s sociopathic behavior.

High intelligence is often seen in sociopaths, although those with low IQs can be sociopathic as well.

Sociopaths are pathological liars who have no problem making false claims. In fact, most sociopaths can lie their way past a lie detector test. To wit, it is nearly impossible to get the truth out of a sociopath.

Sociopaths have dominant personalities, meaning that they do not like to lose in any competition – big or small. They hate losing an argument, and they will lie their way through even if their claims make no sense at all. And even when they are proven wrong, they will never apologize.

They are charming speakers who can deliver seemingly hypnotic speeches. They can tell masterful stories that leave an audience in awe.




Cyberbully Minds & Bullying

Cyberbullying Psychodynamics

The Cyberbully Mind and a brief introduction to the Psychodynamics of Cyberbullying are presented. Cyberbullying is defined as the use of Information and Communication Technology (ICT), by a minor, to verbally and/or physically attack another minor, who is unable or unwilling to deescalate the engagement. Given that the vast majority of this abuse occurs in cyberspace, the factors, drives and motivations for cyberbullying are explored.

Bullying, or classic bullying, is a term used to define recurrent and sustained verbal and/or physical attacks by one or more children towards another child who is unable or unwilling to deescalate the engagement. It may involve verbal harassment, physical assault, coercion, intimidation, humiliation, and taunting. Bullying comprises a combination of five types of pediatric abuse: social, sexual, emotional, verbal, and physical.

Bullying requires both the assailant and target to be minors. Adult forms of bullying are termed harassment, stalking, and slander. Despite variants in definition, bullying involves abuse between two or more minors. Classic bullying requires face-to-face interactions within the repertoire of aggressive behaviors.

Cyberbullying and the cyberbully are terms used to define recurrent and sustained verbal and/or physical attacks, by one or more children towards another child who is unable or unwilling to deescalate the engagement, using Information and Communication Technology (ICT). Like classic bullying, the cyberbully engages in harmful, repeated and hostile behavior intended to deprecate a targeted child. Cyberbullying describes threatening or disparaging communications delivered through ICT. Whereas classic bullying involves face-to-face interactions and non-digital forms of communication, cyberbullying consists of information exchanged via ICT and may never involve face-to-face encounters.

By definition, classic bullying and cyberbullying occur among young people. When an adult is involved as the aggressor, the behavior meets criteria for cyber harassment or cyberstalking, which in many states is a criminal act. Although the terms bullying and cyberbullying include adult intimidation behavior in contemporary culture, they describe pediatric behaviors, and adult applications will not be included in this manuscript.


Cyber Harassment & Cyberstalking Laws by State

Children of the 21st century are targeted via classic bullying, cyberbullying, or a combination of the two. Given the evolution of digital technology, the growth of the Internet, and its relevance to the human experience, cyberbullying has reached epidemic proportions among the pediatric segments of society and is becoming a permanent weapon in the toolbox of pediatric aggressors. At the core of all bullying, cyber and classic, are the victimization, disparagement, and abuse of a targeted child. Child abuse, whether perpetrated by a child or an adult, is detrimental to all aspects of a child’s psychological and developmental maturation, following them into adulthood and throughout their lifespan.

Children traumatized by abuse and victimization have higher rates of all the negative psychological and sociological aspects of the human condition, ranging from alcohol and drug abuse and criminal involvement to domestic abuse and psychiatric illness. With the advent of ICT, children are far more susceptible to the nefarious, criminal, and deviant aspects ICT offers humanity. Although ICT offers incredible benefits to society, children are the demographic segment most impacted by the Dark Side of Cyberspace.

Being the richest man in the cemetery doesn’t matter to me. Going to bed at night saying we’ve done something wonderful, that’s what matters. Steve Jobs (1955-2011)

In the United States, October has been marked every year as National Crime Prevention Month, National Bullying Prevention Month & National Cyber Security Month. Clearly, America has to recognize the adverse societal outcomes if cyberbullying is not addressed immediately. Given the complexity of cyberbullying, religious organizations, educational systems and communities must work together to initiate and sustain a concerted effort.

A Canadian educator, Bill Belsey, coined the term cyberbullying in 2008, defining it as “involving the use of information and communication technologies such as e-mail, cell phone and pager text messages, instant messaging, defamatory personal websites, and defamatory online personal polling websites, to support deliberate, repeated, and hostile behavior by an individual or group, that is intended to harm others.”

Since the introduction of this term, cyberbullying has expanded to include all ICT and has spread to all industrialized nations. Because of this alarming reality and the projected negative societal impact if it is not addressed, this writer will analyze cyberbullying through his theoretical concepts of Dark Psychology and iPredator.

Dark Psychology

Dark Psychology is the study of the human condition as it relates to the psychological nature of humanity’s potential to prey upon others. Motivating this potential are criminal and/or deviant drives that lack purpose and cannot be explained by evolutionary instinctual drives and social sciences theory. All of humanity has this potential to victimize other humans and living creatures. While most restrain or sublimate this tendency, some act upon these impulses.

Dark Psychology seeks to understand those thoughts, feelings, behaviors, and phenomenological and subjective processing systems that lead to predatory behavior that is antithetical to contemporary understandings of human behavior. Dark Psychology assumes that criminal, deviant, and abusive behaviors are purposive and have some rational, goal-oriented motivation 99% of the time. It is in the remaining 1% that Dark Psychology parts from Adlerian theory and Teleology. Dark Psychology postulates there is a realm within the human psyche that enables some people to commit atrocious acts without purpose. The contingent of humanity that uses ICT to harm and victimize others has been coined iPredator, which Dark Psychology also investigates.


iPredator is a new construct developed by this writer to describe those, children and adults alike, who use ICT to assault, victimize, and steal from others. Based on this writer’s hypothesis, 80-85% of cyberbullies meet the requirements of iPredator, as defined below.

In relationship to cyberbullying, this writer, along with developmental experts and philosophers, views bullying as driven by a need for control and domination and by a child’s perception that his/her actions will lead to greater peer acceptance and recognition. Alfred Adler (1870-1937) postulated that all people who feel encouraged concurrently feel proficient and appreciated, and will behave in a connected and cooperative way. When discouraged, humans act in unhealthy ways by competing, withdrawing, or giving up. It is finding ways of expressing and accepting encouragement, gaining respect, and practicing Social Interest that helps people to feel fulfilled and optimistic.

Adlerian theory and practice have proven especially relevant when applied to the growth and development of children. A disciple of Alfred Adler, Rudolf Dreikurs (1897-1972), stated that a misbehaving child is a discouraged child and that helping children to feel valued, significant, and competent is often the most effective strategy in coping with difficult child behaviors. As this writer strongly supports many of his tenets, Adler’s theory would define a bully or cyberbully as compensating for deep feelings of inferiority. Inferiority is universal in all children and is the proclivity to feel smaller, weaker, and less socially and intellectually competent than the adults around them.

Adler suggested that if one observes children’s games, toys, and fantasies, they tend to have one thing in common: the desire to grow up, to be big, and to be an adult. This kind of compensation is identical with striving for perfection. Many children, however, are left with the feeling that others will always be better than they are. These psychic experiences of feeling less than others, compounded by striving to feel superior and accepted, are the elements that lead a child to harass and taunt other children. From Adler’s theoretical tenets, it becomes plausible to see why children engage in abusive actions towards other children knowing their actions are causing the target child distress.

Although the behavior is highly detrimental to the targeted victim, Dark Psychology assumes that the Adlerian “need for acceptance” is a viable explanation of the aggressor child’s primary motivation. When the aggressor child’s internal experiences and perceptions move into the area of feeling gratification, power, dominance, and control, without care or thought for the target child’s well-being, Dark Psychology defines this psychological state as deviant, narcissistic, anti-social, and psychopathological. These behaviors may blossom into serious aggressive and/or criminal behavior in adulthood if not squelched or addressed.

Regarding this writer’s construct of iPredator, cyberbullying falls within the iPredator definition when the aggressor is fully aware of his/her intent but continues the abusive pattern despite knowing that he/she is causing the target child significant distress. In order to classify a child as an iPredator, the child must know his/her behaviors are causing anguish in a target child.

The actual percentage of cyberbullying that occurs without the aggressor’s knowledge of causing a target child anguish would be very difficult to compile with high certainty. Many children do inadvertently insult and deprecate other children online without knowing they are doing so, thinking instead that they are being humorous and clever. All present estimates of a child’s modus operandi for bullying another child have been derived through interviews and self-report, with no accurate way of confirming their honesty. Children who are not aware of their abusive actions are not included as iPredators or defined using Dark Psychology tenets.

Included in this writer’s two concepts of Dark Psychology and iPredator are those children who are fully aware of their abusive behaviors but continue to target the victim. There are two sub-groups of such children that iPredator and Dark Psychology address, as they meet each concept’s criteria. As part of this writer’s pediatric cyberbullying construct, the Cyberbully Triad, these two groups are called the Righteous Cyberbully and the Narcissistic Cyberbully.

The first group of cyberbullies, the Righteous Cyberbully, are aware of their actions, understand they are causing the target child distress, and understand their actions are wrong, but continue because they believe the target child deserves their assaults. The Righteous Cyberbully feels warranted in his/her actions for reasons including:

I. The target child offended or abused the aggressor in the past through bullying or an isolated aggressive event.

II. The target child offended or abused a peer or loved one close to the aggressor in the past through bullying or an isolated aggressive event.

III. The target child offended the aggressor’s belief system due to their race, religious affiliation, physical presentation, socioeconomic status, sexuality, or any other aspect the aggressor deems offensive, immoral, or unjust.

Of all types of cyberbullies, the most concerning and potentially dangerous segment are those children who engage in cyberbullying with full knowledge of their actions, understand the distress they are causing the target child, and continue their assaults motivated by sheer malevolent intent. This segment of the Cyberbully Triad is called the Narcissistic Cyberbully. Unlike the group just described, this group does not rationalize its motivations by blaming the target child or by feeling justified based on the target child’s genetic and cultural makeup.


These children may verbalize to their peers that they are inflicting their wrath upon the victim for the reasons described above, but in reality they are motivated not by these reasons but by the sheer enjoyment of inflicting abuse upon others. Of the total population of cyberbullies, this segment of children is the smallest, but the most dangerous to society. Children within this group are the future sociopaths, criminals, and psychopaths who victimize and inflict pain on others as adults, devoid of remorse.

Society will never mandate that all children be evaluated for antisocial and narcissistic personality disorder tendencies, nor is this writer encouraging mandatory assessments for all children. What this writer is pointing out is that cyberbullying is an immoral and destructive behavior that causes the target child serious distress and psychological wounds that can last the rest of his/her life. Whether the aggressor is ignorant of their cyberbullying, feels justified by a distorted belief system, or is at the beginning stages of becoming a future narcissist or sociopath, society must treat all children as having the same potential, both as aggressor and as victim.

It is for this reason of never knowing the impact cyberbullying will have upon the aggressor, victim, and community that consistent and regular education cannot be encouraged enough. The vast majority of children who are cyberbullying or being cyberbullied rarely disclose this information to parents or teachers. As this writer, along with all citizens of the world, thrives at the beginning of the Information Age, answers to the questions of the societal impact of cyberbullying will be addressed by future generations.

Although briefly discussed above, the definitions of iPredator, iPredator Bridge, Cyberstealth, and Dark Psychology are provided below. Also provided here are links and descriptions for our other site cyberbullying pages, links to iPredator Inc.’s cyberbullying Pinterest boards, and direct links to our internet safety tools available for purchase.

What will become of today’s cyberbully tomorrow? They will have grown into adults, having spent their entire lives learning how best to devastate others using Information and Communications Technology. Michael Nuccitelli, Psy.D. (2011)




Over the past 10 years, the nature of fraud has become more sophisticated and systematized. Gone are the days of the lone-wolf hacker seeing what they could get away with. Today, those days seem almost simple. Not that I should be saying it, but fraud and the people who perpetrated it had a cavalier air about them, a bravado. It was as if they were saying, in the words of my good friend Frank Abagnale, “catch me if you can.”


In the ongoing scrum over cell phone privacy, there are at least two major fields of play: phone-data encryption, in which, right now, Apple is doing its best not to share its methods with the government; and network security, in which the police and the military have been exploiting barn-door-size vulnerabilities for years. And it’s not just the government that could be storming through. The same devices the police used to find one low-rent tax fraudster are now, several years later, cheaper and easier to make than ever.

“Anybody can make a StingRay with parts from the Internet,” Rigmaiden tells me, citing a long litany of experiments over the years in which researchers have done just that. “The service provider is never going to know. There’s never any disruption. It’s basically completely stealth.” In the coming age of democratized surveillance, the person hacking into your cell phone might not be the police or the FBI. It could be your next-door neighbor.

No one wants to fix the problem—they exploit the vulnerability, too

The StingRay arrived a few years later—an update of Triggerfish designed for the new digital cellular networks. The first clients were soldiers and spies. The FBI loves IMSI catchers—“It’s how we find killers,” Director James Comey has said—even if last fall, under pressure after Rigmaiden’s case and others became public, the Justice Department announced that the FBI would, in most cases, need warrants before using them.

Most local police departments, though, still aren’t bound by that directive. Neither are foreign governments, which are widely suspected to be using IMSI catchers here (as we are no doubt doing elsewhere). And so, amid the publicity over the StingRay, a marketplace has opened up for countermeasures. On the low end, there’s SnoopSnitch, an open source app for Android that scans mobile data for fake cell sites. On the high end, there’s the CryptoPhone, a heavily tricked-out cell phone sold by ESD America, a boutique technology company out of Las Vegas. The $3,500 CryptoPhone scans all cell-site signals it’s communicating with, flagging anything suspicious. Even though the CryptoPhone cannot definitively verify that the suspect cell is an IMSI catcher, “we sell out of every CryptoPhone we have each week,” says ESD’s 40-year-old chief executive officer, Les Goldsmith, who has marketed the phone for 11 years. “There are literally hundreds of thousands of CryptoPhones globally.” ESD’s dream clients are nations. Last year the company debuted a $7 million software suite called OverWatch, developed with the German firm GSMK. OverWatch, ESD says, can help authorities locate illegal IMSI catchers using triangulation from sensors placed around a city. “Right now, it’s going into 25 different countries,” Goldsmith says.
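The kind of heuristic a detection app like SnoopSnitch or the CryptoPhone applies can be sketched in a few lines: compare each observed cell against a baseline of cells seen before, and flag newcomers that also show classic IMSI-catcher tells, such as an empty neighbour list or an abnormally strong signal. All field names and thresholds below are illustrative assumptions for a minimal sketch, not the actual rules used by either product.

```python
# Minimal fake-cell-site heuristic: score how IMSI-catcher-like an
# observed cell looks. Field names and thresholds are assumptions.
from dataclasses import dataclass, field


@dataclass
class CellObservation:
    cell_id: int        # identifier broadcast by the cell
    mcc: int            # mobile country code
    mnc: int            # mobile network code
    signal_dbm: int     # received signal strength
    neighbours: list    # neighbour cells advertised by this cell


@dataclass
class FakeCellHeuristic:
    known_cells: set = field(default_factory=set)
    strong_signal_dbm: int = -50   # assumed threshold: suspiciously strong

    def score(self, obs: CellObservation) -> int:
        """Return a suspicion score; higher means more IMSI-catcher-like."""
        score = 0
        if (obs.mcc, obs.mnc, obs.cell_id) not in self.known_cells:
            score += 1   # never seen this cell before
        if not obs.neighbours:
            score += 1   # legitimate towers advertise neighbour cells
        if obs.signal_dbm > self.strong_signal_dbm:
            score += 1   # unusually strong signal, i.e. transmitter very close
        return score

    def learn(self, obs: CellObservation) -> None:
        """Add a trusted cell to the baseline."""
        self.known_cells.add((obs.mcc, obs.mnc, obs.cell_id))
```

In use, a previously learned tower with neighbours and ordinary signal strength scores 0, while an unknown, neighbourless, very-strong cell scores 3; a real detector would weigh many more signals, but this is the shape of the comparison.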



On a parallel track to the defense market, hobbyists and hackers have gone to work on the cell networks and found they can do a lot of what Harris can. In the early days of cell phones, when the signals were analog, like radio, DIY phone-hacking was a cinch. Anyone could go to a RadioShack and buy a receiver to listen in on calls. Congress grew concerned about that and in the 1990s held hearings with the cellular industry. It was an opportunity to shore up the networks. Instead, Congress chose to make it harder to buy the interception equipment. The idea was that when digital mobile technology took hold, intercepting digital signals would be just too expensive for anyone to bother trying. That turned out to be more than a little shortsighted.

For as long as you’ve been using a phone on a 2G (also called GSM) network or any of its digital predecessors, your calls, texts, and locations have been vulnerable to an IMSI catcher. In 2008 researcher Tobias Engel became the first to demonstrate a crude homemade IMSI catcher, listening to calls and reading texts on a pre-2G digital cell network. Two years later, at a DEF CON hacking conference in Las Vegas, researcher Chris Paget monitored calls made on 2G with a gadget built for just $1,500. What made it so cheap was “software-defined radio,” in which all the complicated telecommunications tasks aren’t pulled off by the hardware but by the software. If you couldn’t write the software yourself, someone on the Internet had probably already done it for you.

Phones now operate on more sophisticated 3G and 4G (also known as LTE) networks. In theory, IMSI catchers can pinpoint only the location of these phones, not listen to calls or read texts. But none of that matters if the IMSI catcher in question can just knock a phone call back down to 2G. Enter Harris’s Hailstorm, the successor to StingRay. “It took us a while to stumble onto some documents from the DEA to see that the Hailstorm was a native LTE IMSI catcher,” the ACLU’s Soghoian says. “It was like, ‘Wait a second—I thought it’s not supposed to work on LTE. What’s going on?’ ”

They found a hint to the answer last fall, when a research team out of Berlin and Helsinki announced it had built an IMSI catcher that could make an LTE phone leak its location to within a 10- to 20-meter radius—and in some cases, even its GPS coordinates. “Basically we downgraded to 2G or 3G,” says Ravishankar Borgaonkar, a 30-year-old Ph.D. who has since been hired at Oxford. “We wanted to see if the promises given by the 4G systems were correct or not.” They weren’t. The price tag for this IMSI catcher: $1,400. As long as phones retain the option of 2G, calls made on them can be downgraded. And the phone carriers can’t get rid of 2G—not if they want every phone to work everywhere. The more complex the system becomes, the more vulnerable it is. “Phones, as little computers, are becoming more and more secure,” says Karsten Nohl, chief scientist at Security Research Labs in Berlin. “But the phone networks? They’re rather becoming less secure. Not because of any one action but because there’s more and more possibility for one of these technologies to be the weakest link.”
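Because the attack described above works by pushing an LTE phone down onto 2G, one defensive heuristic is to watch the phone's radio-technology transitions and flag a fall to 2G that happens while LTE coverage was recently good. This is a sketch under assumed inputs (radio type and signal strength samples), not any vendor's actual detector.

```python
# Sketch of a 2G-downgrade detector: flag a drop to 2G that occurs
# shortly after the phone had a healthy LTE connection. The window
# size and signal threshold are illustrative assumptions.
from collections import deque


class DowngradeMonitor:
    def __init__(self, window: int = 5, good_lte_dbm: int = -105):
        self.history = deque(maxlen=window)  # recent (rat, signal_dbm) samples
        self.good_lte_dbm = good_lte_dbm     # assumed "healthy LTE" floor

    def observe(self, rat: str, signal_dbm: int) -> bool:
        """Record a sample; return True if it looks like a forced downgrade.

        A drop to 2G is suspicious only if a recent sample showed the
        phone on 4G with a decent signal -- a natural fallback from weak
        LTE coverage would not trip the check.
        """
        suspicious = (
            rat == "2G"
            and any(r == "4G" and s >= self.good_lte_dbm
                    for r, s in self.history)
        )
        self.history.append((rat, signal_dbm))
        return suspicious
```

A phone that falls back to 2G after sitting on weak LTE is treated as normal coverage behavior; only a 2G drop immediately following strong LTE samples raises the flag, which is roughly the distinction a real detector has to draw.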

The device Borgaonkar’s team built is called a “passive receptor,” a sort of budget StingRay. Instead of actively targeting a single cell phone to locate, downgrade to 2G, and monitor, a passive receptor sits back and collects the IMSI of every cell signal that happens by. That’s ideal for some police departments, which, the Wall Street Journal reported last summer, have been buying passive devices in large numbers from KEYW, a Hanover, Md., cybersecurity company, for about $5,000 a pop. One Florida law enforcement document described the devices as “more portable, more reliable and ‘covert’ in functionality.” If all you want to do is see who’s hanging out at a protest—or inside a house or church or drug den—these passive receptors could be just the thing.

A programmer I spoke with who has worked for Harris is of two minds about what the hobbyists are up to. “There’s a giant difference between do-it-yourself IMSI catchers and something like the Harris StingRay,” he says proudly. That said, he’s taken with how fast the amateurs are catching up. “I’d say the most impressive leap is the advancement of LTE support on software-defined radio,” he says. “That came out of nowhere. From nothing to 2G took, like, 10 years, and from 2G to LTE took five years. We’re not there yet. But they’re coming. They’re definitely coming.”

You don’t have to look far to see what a world of cheap and plentiful IMSI catchers looks like. Two years ago, China shut down two dozen factories that were manufacturing illegal IMSI catchers. The devices were being used to send text-message spam to lure people into phishing sites; instead of paying a cell phone company 5¢ per text message, companies would put up a fake cell tower and send texts for free to everyone in the area.

Then there’s India. Once the government started buying cell-site simulators, the calls of opposition-party politicians and their spouses were monitored. “We can track anyone we choose,” an intelligence official told one Indian newspaper. The next targets were corporate; most of the late-night calls, apparently, were used to set up sexual liaisons. By 2010 senior government officials publicly acknowledged that the whole cell network in India was compromised. “India is a really sort of terrifying glimpse of what America will be like when this technology becomes widespread,” Soghoian says. “The American phone system is no more secure than the Indian phone system.”

In America, the applications are obvious. Locating a Kardashian (in those rare moments when she doesn’t want the media to locate her) is something any self-respecting TMZ intern would love to be able to do. “What’s the next super Murdoch scandal when the paparazzi are using a StingRay instead of hacking into voicemail?” Soghoian says. “What does it matter that you can build one for $500 if you can buy one for $1,500? Because at the end of the day, the next generation of paparazzi are not going to be hackers. They’re going to be reporters with expense accounts.”

Over coffee after court in Annapolis, Soghoian and I peruse the Alibaba.com marketplace on his smartphone. He types in “IMSI catcher,” and a list materializes. The prices are all over the place, as low as $1,800. “This one’s from Nigeria. … This one’s $20,000. … This one’s from Bangladesh.” I note that the ones on sale here seem to work only on 2G, unlike the Hailstorm. “You can get a jammer for like 20 bucks,” Soghoian says. With that, you roll any call back to 2G. Pair the signal jammer with a cheap old IMSI catcher, and you’ve got a crude facsimile of a Hailstorm.

Every country knows it’s vulnerable, but no one wants to fix the problem—because they exploit that vulnerability, too. Two years ago, Representative Alan Grayson (D-Fla.) wrote a concerned letter to the Federal Communications Commission about cellular surveillance vulnerabilities. Tom Wheeler, the former industry lobbyist who now runs the regulatory agency, convened a task force that so far has produced nothing. “The commission’s internal team continues to examine the facts surrounding IMSI catchers, working with our federal partners, and will consider necessary steps based on its findings,” says FCC spokesman Neil Grace.

Soghoian isn’t optimistic. “The FCC is sort of caught between a rock and a hard place,” he says. “They don’t want to do anything to stop the devices that law enforcement is using from working. But if the law enforcement devices work, the criminals’ devices work, too.” Unlike the battle between the FBI and Apple, the network-vulnerability struggle doesn’t pit public sector against private; it’s the public sector against itself.

From his apartment in central Phoenix, Rigmaiden consulted with the Washington state branch of the ACLU when it helped draft the state law requiring a warrant for the use of IMSI catchers. He’s suing the FBI for more StingRay documents, and recently the court shook loose a few more. And now that his parole is over and he can travel, he’d like to lecture across the country about fighting surveillance. “Everything that I thought was wrong back then is even worse today,” he says, chuckling softly. “The only thing that’s changed is now I’m going to do the other route—which is participate and do what I can to try to change it.”

As improbable a privacy standard bearer as Rigmaiden may be, his ability to draw inferences and connect dots proved useful once; maybe it will again. He has dug up the specs of some KEYW passive devices, and he sees no reason the big companies like Harris aren’t already miles beyond that now. “Every beat cop, every police car on every police force is going to have one of these passive interceptors in the car or on their utility belt,” Rigmaiden says. For surveillance to become truly democratized, he reasons, “it has to be as easy as installing an app on your phone. I think somebody somewhere would have to decide, I’m going to make this easy for people to do. And then they’d do it.”

He’s hardly alone in this view. “The next step for the technology is to go into the hands of the public, once it gets cheap enough,” says Jennifer Lynch, a staff attorney at the Electronic Frontier Foundation. “Companies are always going to try to find new markets for their technologies. And there are lots of people who want to spy on their neighbors or their spouses or their girlfriends.”

Meanwhile, apart from IMSI catchers, a whole other vulnerability has been exposed: Companies such as Verint Systems and Defentek have produced devices that exploit a huge security hole in SS7 (short for Signaling System 7), the network that interconnects every cellular provider around the world. Using SS7, researchers on laptops have been able to pinpoint the location of a particular cell phone anywhere in the world—and even intercept calls. The attacker does leave an IP address as a trace. “But if that IP address leads somewhere like Russia or China,” says Tobias Engel, who cracked SS7 in a 2014 demonstration in Hamburg, “you really don’t know much more.” The industry lobbying group CTIA–The Wireless Association maintains that SS7 is more secure in America than in Europe. “Outside the U.S., the networks are more fragmented, not as homogeneous,” says John Marinho, who runs the group’s cybersecurity working group.

Goldsmith of ESD—which has developed another multimillion-dollar software package, called Oversight, aimed at warding off SS7 attacks—disagrees. “That’s comical,” he says. “I can tell you we performed tests on U.S. carriers, and they’re just as vulnerable as anyone else.”

What fascinates Rigmaiden the most—and what sometimes makes him want to go live in the woods again—is how no matter what happens with Apple’s battle, the cell phone network problem may be with us for as long as there are networks. “This isn’t something that can really be fixed,” he says. “It’s just built into the way communications work. You can always zero into one signal among many signals, if you have enough data. You don’t need to hack anything—just analyze the signals in the air.”


Big Brother is watching your every move — and so is your spouse. As global positioning systems improve, so do the apps that track your movements, and that’s making it tougher than ever to keep a low profile.



Behavioral Analysis: The Future of Fraud Prevention


In the world of business, one word tends to make people cringe: fraud. This is understandable. Fraud cost businesses $16.3 billion in 2014. In addition to lost revenue from fraudulent purchases themselves, fraud also costs customer trust. Many customers who experience fraud will choose not to return to the business.

A large amount of this cost ($6.4 billion) in 2014 was card-not-present (CNP) fraud. CNP means exactly what it sounds like: purchases or payments made when a credit card does not have to be physically present. Transactions made via computer or smartphone are CNP transactions. Those made in a store by handing a credit card to a salesperson are not.

These types of transactions have exploded in popularity. The ease of online shopping has many customers making the switch from brick and mortar to CNP purchases. The benefits also mean the emergence of many online-only businesses. The result is a flourishing world of e-commerce.

Unfortunately, where the money goes, fraudsters follow. With the increasing popularity of shopping online and via mobile devices, e- and m-commerce fraud is increasing. Also, the recent adoption of EMV chip cards in the United States means fraudsters will have a harder time committing fraud in person, so they are expected to migrate to online and mobile fraud avenues. All in all, CNP fraud is expected to grow to four times the size of card-present fraud by 2018.

Changes in how customers shop or access their money require changes in how businesses manage and prevent fraud. While fraudsters continue to think up new ways to commit fraud, many businesses continue to use the same fraud prevention methods.


Fraud vs. Friction

In the world of fraud prevention, the password remains king, even though Bill Gates declared in 2004 that the password is dead. Today, the average web user has over 50 accounts with each requiring a password. The popularity of password authentication poses a problem for customers and businesses alike.


Passwords are in the “something you know” category of the three types of authentication. Unfortunately, passwords can easily be stolen or guessed by fraudsters. The average user has a hard time developing a unique password for each of the 50 accounts, so they use easily guessed passwords or the same password for multiple accounts. Thus a fraudster who accesses one password-protected account can often access others by the same person. Passwords can also frustrate customers who don’t want to have to re-enter a password every time they want to access an account.

Authentication methods like passwords cause high friction for users. This increases checkout time, resulting in more abandoned shopping carts. Of course, businesses don’t want this, so they try to reduce friction by providing easy options like one-click payments. Often, reducing friction can also mean skimping on security and welcoming fraudsters in.

The best practice in online security today is two-factor authentication, which requires users to prove their identity twice. By providing two hurdles for fraudsters to jump instead of one, it makes a fraudster’s job harder. Thus it is less likely a fraudster can hack into an account.

Unfortunately, two-factor authentication often increases friction for users. For example, here’s how a user logs in to Google’s Gmail:

Step 1: Enter a password.

Step 2: Enter a validation code sent to the user’s mobile device.

While two-factor authentication is more secure than simply entering a password, it comes with more friction, which deters potential customers.
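The two-step flow above can be sketched server-side in a few lines. This is a minimal illustration, not Google’s actual implementation; the function names, hashing parameters, and five-minute code lifetime are all assumptions.

```python
import hashlib
import hmac
import secrets
import time


def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a password hash; the iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


def issue_code() -> tuple:
    # Step 2 setup: generate a six-digit code to text to the user's phone.
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time() + 300  # code is valid for five minutes


def two_factor_login(password, stored_hash, salt,
                     submitted_code, sent_code, expires_at):
    # Step 1: something you know (the password).
    if not hmac.compare_digest(hash_password(password, salt), stored_hash):
        return False
    # Step 2: something you have (the phone that received the code).
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(submitted_code, sent_code)
```

A login succeeds only when both hurdles are cleared, which is exactly why a stolen password alone no longer opens the account.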

New methods of addressing fraud must balance security and friction. High security, low friction methods are needed to keep purchases secure while allowing customers to shop without becoming frustrated by lengthy checkout processes.

The Ever-Evolving Fraudster

In addition to the problem of friction, businesses have to fight the evolution of new fraud methods. New fraud prevention and detection methods must adapt as the way fraudsters commit fraud continues to change.

The traditional methods of fighting fraud are no longer enough to keep up with hackers. In 2014, big companies like Target and Home Depot suffered massive attacks. Customers and businesses alike began to wonder how they could protect themselves when a company as big as Target couldn’t.


As one industry expert notes in an article by PYMNTS.com, “There’s a hacker for everything.” This means security isn’t achieved by any one fraud prevention method. The key is to develop a security plan that addresses the many different avenues fraudsters may use and stands up to new, evolving attacks.

Fraud prevention and detection must enter the 21st century if businesses want to secure themselves. If not, businesses and customers alike can’t be sure they are protected. Businesses don’t want to realize they could have prevented the newest threat when it is already too late.

What is Behavioral Analysis?

With businesses looking to reduce friction and keep up with emerging fraud methods, new ways of detecting and preventing fraud are being developed. One of these 21st century methods is behavioral analysis.

Behavioral analysis relies on something inherent in a user: how they behave. What a user does may be a better indicator than who they say they are, which is the question traditional authentication tries to answer.

Whether we realize it or not, our shopping habits reveal our unique behavior. For example, how often you shop, what time of the day you shop, and where you shop all reveal important information about your behavior. Retailers can use this information to tell what is authentically you and what is someone using your information to make a purchase.

Types of Behavioral Analysis

Behavioral analysis relies on a user’s unique behavior to build a profile of behavior that is normal. Suspicious behavior that strays too far from the norm can then be singled out as fraudulent. It can be used in many different ways to detect fraud via online channels.

Here is an example from SiftScience to show how behavioral analysis works:

User 1: Login → Click on Product #8473 → Click on Product #157 → Click on Product #102 → Complete Purchase

User 2: Failed Login → Request Password → Direct Link to Product #821 → Change Shipping Address → Complete Purchase

Which behavior is suspicious?

The second one. While the first user successfully logs in and takes time to browse before making a purchase, the second user fails to log in, navigates to one product and changes the shipping address before completing the purchase. For behavior analysis, the second user would raise a red flag.

The amount of time a customer is likely to spend browsing, their browsing history, and how they browse are all online shopping behaviors that can be critical to preventing and detecting fraud.
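A rule-based version of this comparison can be sketched in a few lines. The event names, risk weights, and threshold below are invented for illustration; they are not taken from Sift Science’s product.

```python
# Risk weight per suspicious event; weights are illustrative assumptions.
RISK_WEIGHTS = {
    "failed_login": 2,
    "password_reset": 2,
    "direct_product_link": 1,      # arrived at a product without browsing
    "shipping_address_change": 3,
}


def session_risk(events):
    # Sum the weights of any suspicious events; benign events score zero.
    return sum(RISK_WEIGHTS.get(event, 0) for event in events)


def is_suspicious(events, threshold=4):
    # Flag the session when accumulated risk crosses an assumed threshold.
    return session_risk(events) >= threshold


# The two example sessions from the text, as event streams.
user1 = ["login", "view_product", "view_product", "view_product", "purchase"]
user2 = ["failed_login", "password_reset", "direct_product_link",
         "shipping_address_change", "purchase"]
```

Run against the two sessions, user 1 scores 0 and user 2 scores 8, so only the second raises a red flag. Real systems replace the hand-picked weights with learned ones, but the shape of the decision is the same.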


Browsing behavior is one type of behavioral analysis. Another is physical behavioral data taken from the devices themselves, which can collect all sorts of sensor data that tells businesses more about a user’s behavior. What this data is and how helpful it is at determining fraud varies.

”Just as when you touch something with your finger you leave behind a fingerprint, when you interact with technology you do so in a pattern based on how your mind processes information, leaving behind a ‘cognitive fingerprint’,” explains a contract document for research into behavioral biometrics at West Point, the U.S. Army’s military academy.

Here are some examples of behavioral data that can be collected via different electronic devices.

Computer Behavioral Biometrics:

-Mouse dynamics

-Typing speed

-Key pressure

-Navigation habits

-Swipe speed and distance


Smartphone Behavioral Biometrics:

-Speed, style, and position on screen of a signature

-Screen pressure

-Angle a user holds the phone

-Movement across a screen

-Typing rhythm

-Heart rate

-Skin conductivity

As you can see, mobile devices can collect a wider variety of rich data than computers. This data can piece together a picture of the user. For example, fraudsters tend to have a higher heart rate than legitimate users.

Difference in heart rate between fraudsters and legitimate users, via Riskified.

In addition, measured skin conductivity tells how much a user’s hands are sweating. Fraudulent users tend to sweat more than authentic users, so this data can be used to identify whether a user is good or bad.
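One simple way to turn such sensor readings into a fraud signal is to compare a session against the user’s own baseline using z-scores. The sensor names, baseline values, and readings below are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical per-user baseline: readings collected over past sessions.
baseline = {
    "heart_rate":         [62, 65, 63, 66, 64],        # beats per minute
    "skin_conductance":   [2.1, 2.3, 2.0, 2.2, 2.4],   # microsiemens
    "typing_interval_ms": [180, 175, 190, 185, 178],   # ms between keystrokes
}


def z_score(value, samples):
    # How many standard deviations a reading sits from the user's norm.
    return abs(value - mean(samples)) / stdev(samples)


def anomaly_score(reading):
    # Average z-score across every sensor present in the baseline.
    scores = [z_score(reading[k], baseline[k]) for k in baseline if k in reading]
    return sum(scores) / len(scores)
```

A session with an elevated heart rate and sweaty hands lands far from the baseline and scores high, while the user’s ordinary readings score near zero; that gap is what the fraud model keys on.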

The History of Behavioral Analysis: From Static Biometrics to Behavior

In the quest to reduce fraud, technology businesses have developed new, better methods. Behavior analysis as a fraud security measure evolved from shortcomings of other methods.

Behavior isn’t the only biometric used. Physical biometrics like fingerprint, voice, eye, and face recognition involve inherent characteristics of a person. This is one of the longest used ways of identifying a person. Biometrics have been used by humans since 30,000 years ago when the first cave painting was signed with a hand print.


In recent times, the development of biometric methods has skyrocketed. In 2013, Apple introduced Touch ID, which validates identity via fingerprint and now underpins Apple Pay. Intel Security’s True Key uses facial scanning. Fujitsu developed a method to scan irises, as well as a way to identify a user by palm veins. Mastercard is expected to roll out a “pay by selfie” authentication method in 2016. (“The Future of Consumer Authentication (And It’s A Little Weird)”)


Although physical biometrics seem like secure methods of user identification (someone can’t “steal” something that is a part of your physical body), there are a lot of problems associated with them. Different static biometrics have varying levels of security.

Fingerprints, for example, sound extremely secure. Fingerprint scanners seem like something straight out of a sci-fi movie. The problem is that people leave their fingerprints on everything, making this identifier a target for fraudsters to steal. Fingerprints can even be copied from public photos and used to hack into a person’s device.

Although not every physical identifier can be stolen, these biometric methods often require a user to take the time to validate themselves as a true user. They also tend to use only one data point: a fingerprint scan, an ear scan, a photo. This can make it easy for a valid user to be shut out, because “fraudster” or “not a fraudster” is determined by a single piece of information.

Thus the need for another biometric method evolved. Using something a person inherently is remains a good idea, but without the user friction or the need for hardware outfitted with new sensors. Behavior is a more multi-faceted method of authentication.

The Ins and Outs of Behavioral Analysis

Here’s where we are so far with our thinking about behavioral analysis:

Each person behaves uniquely. A profile of that person’s behavior is recorded. A user who acts vastly different from the behavior profile is suspicious.

How exactly does this all work? What happens if a user behaves differently?

Businesses need a way to stop fraud before it happens. They also need to reduce the chance that a perfectly fine customer is turned away from making a purchase. Thus a system is needed to gather data and make an informed decision about whether a user is a fraudster and should be blocked from a purchase.

This is where machine learning comes in. Machine learning algorithms take the data gathered and determine patterns to predict the probability that a purchase is fraudulent.

Here’s a simple outline of the steps of behavioral analysis to detect fraud:

  1. Gather loads of information to form a template of the user’s behavior and “train” the system.
  2. Determine a behavioral pattern and set a threshold that marks where behavior crosses from normal to fraudulent. This threshold can be a probability that a transaction is fraudulent (for example, 95%).
  3. When a user is encountered, calculate the probability that the transaction is fraudulent based on behavior. If the probability is above the threshold (95% in this example), block the user from the transaction.
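The three steps above might be sketched as follows. This is a toy model: the distance-to-probability mapping, the feature choices, and every number in it are illustrative assumptions, not any vendor’s actual algorithm.

```python
import math


class FraudDetector:
    """Toy behavioral fraud detector; all parameters are illustrative."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold  # block when P(fraud) >= threshold
        self.profile = None

    def train(self, sessions):
        # Step 1: build a template (per-feature average) from past sessions.
        n, dims = len(sessions), len(sessions[0])
        self.profile = [sum(s[i] for s in sessions) / n for i in range(dims)]

    def fraud_probability(self, session):
        # Step 2: map distance from the template to a probability with a
        # logistic curve (an assumed mapping, not an industry standard).
        distance = math.dist(self.profile, session)
        return 1 / (1 + math.exp(-(distance - 5.0)))

    def should_block(self, session):
        # Step 3: block the transaction when risk crosses the threshold.
        return self.fraud_probability(session) >= self.threshold
```

For example, training on sessions described by two features (minutes spent browsing, pages viewed) such as `[10, 20]`, `[12, 18]`, `[11, 22]` yields a profile near `[11, 20]`; a similar session scores a near-zero fraud probability, while a wildly different one like `[30, 1]` scores near 1 and is blocked.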


In essence, machine learning algorithms develop a pattern. Then a risk value is calculated using this pattern. If the risk is deemed high enough, there is a large chance the user is fraudulent. Then the user is either blocked from accessing an account or blocked from completing a purchase, depending on how the behavioral analysis is being used.

The risk threshold can be adjusted. Set it too low and authentic users may be blocked; set it too high and fraudsters might not be detected.

The number of false positives varies depending on the behaviors analyzed and the accuracy of the technology used to gather data. In general, behavioral biometrics tend to produce fewer false positives than other detection methods.

Pros of Behavioral Analysis

Besides decreasing the incidence of false positives, there are many reasons behavioral analysis is a good choice for mitigating fraud.

-Behavioral methods gather large amounts of diverse data. For example, a smartphone that gathers behavioral information has many data points to evaluate fraud potential, while static biometrics have less information to go on. This results in a richer profile of who an authentic user is and who a fraudulent user is.

As a Tech Radar article notes, “It’s like having your finger on the fingerprint sensor on your phone throughout the whole process.”

-It is frictionless and non-invasive. Behavioral biometrics are also called “passive biometrics” because users don’t have to do anything different for them to work. They don’t have to put their fingers over a certain button or speak into a microphone.

They only have to keep behaving as they always do. There’s no interference to the user and, therefore, no friction. In addition, it’s non-invasive, as security relies not on what you are doing on your phone but how you are doing it.

-Behavioral analysis can detect fraud in early stages. It can detect fraudulent activity before a purchase is attempted. This makes it easier and cheaper for companies to prevent losses (Behavioral Analytics for Detecting Fraud).

-Behavioral analysis can detect new fraud schemes. Because it relies on behavior, it detects abnormal behavior, regardless of the attack scheme. This makes it good for new attacks that aren’t yet exposed.

-It doesn’t require new hardware. Behavior analysis works on all smartphones because of the sensors embedded in these devices already. This means users don’t have to buy a token or a wearable technology to authenticate. They don’t have to purchase the newest type of smartphone outfitted with a fingerprint scanner. This means behavioral analysis has the ability to be widely implemented.


Looking to the Future

Behavioral analysis has the potential to be adapted to many different devices, including an entire smartphone’s operating system, not just certain apps that use the technology. This means an entire phone can be protected. Just as a case protects your phone from physical damage, behavioral analysis can protect it from fraud.


The main takeaways about behavioral analysis and fraud:

-Behavioral analysis can gather a lot of data. More data means better identification and fewer false positives.

-Reduced friction. Users don’t have to enter a password or authenticate via a static biometric. This means customers can get back to shopping and not abandon checkout in frustration.

Behavioral analysis is a high security, low friction method of fraud prevention. Businesses can integrate it with traditional security measures like passwords to build a system resistant to old and new fraud methods.

While fraudsters have found ways around many security measures, they can’t possibly mimic every aspect of a user’s behavior. As fraudsters are driven to online and mobile avenues of wreaking havoc, this is a technology capable of adapting. This way, the evolution of fraud prevention can catch up to the speed of fraudsters.




On Wednesday, two officials from the Justice Department and Department of Homeland Security told Congress that the devices are programmed to track cell phone locations — but not gather calls or messages.



History suggests that cookie-based media, and Snapchat in general, may be a fad. In 2013, several viral video companies thrived, thanks to a knack for being able to rank highly in Facebook’s News Feed by using teasing headlines. For a time, it worked; Upworthy, for example, saw traffic hit nearly 90 million unique users. But Facebook changed its News Feed, consumers tired of the click bait, and traffic sank. “Facebook changed and we adapted,” says Upworthy co-CEO Peter Koechley.

Through the bluster is a hint of anxiety, common enough among moguls and artists, that maybe the magic is fleeting. That’s heightened by the knowledge that everything on Snapchat today will be gone tomorrow. Khaled sometimes saves his best posts. “I don’t save them all,” he says. “Just the classic ones.” 


Stingrays, also known as “cell site simulators” or “IMSI catchers,” are invasive cell phone surveillance devices that mimic cell phone towers and send out signals to trick cell phones in the area into transmitting their locations and identifying information. When used to track a suspect’s cell phone, they also gather information about the phones of countless bystanders who happen to be nearby.




Human beings, and the civilizations they have created, have always been defined by networks. Looking back over the long rhythms of history, it is possible to observe how each broad epoch of the human saga has been defined by the way its inhabitants connect and communicate. From the economic patterns of production and consumption to the social patterns of everyday life, how we connect has defined who we are.[*]

From the birth of civilization until the middle ages, human beings were dominated by the oral tradition and the constraints of animal-powered communication. Priestly classes controlled what was known and local hierarchies defined and controlled individual ambition.

Gutenberg’s innovation in printing unleashed an explosion of information and communication such as the world had never seen. The spread of knowledge that resulted destabilized the world as Gutenberg’s contemporaries knew it.

The next great network revolution came in the mid-19th century. The birth of the railroad accelerated communication to a speed that was inconceivable before the perfection of the steam locomotive. The forces of geography that had previously constrained human enterprise succumbed to steam rolling on steel.

Contemporary to the railroad revolution was another equally important and destabilizing innovation that would further extend humanity’s reach. First with the telegraph and, then, later with telephony, instantaneous communication across great distances led not only to the ultimate collapse of distance but also enabled the management of large-scale, far-flung systems. The modern corporation could not exist without it.

The same concept of information as signals was harnessed to deliver sound and video. By reaching Americans on a point-to-multipoint basis, broadcasting overcame the one-off inefficiency of previous point-to-point systems. Connecting the nation’s homes, offices, and automobiles, over-the-air services created a national platform for shared American experiences.

Over the last several decades, the fourth revolution, digital communication, has both contributed to the size and scale of organizations (including network providers) as well as begun to re-empower small economic units to take on the behemoths. One of the signal achievements of this latest great information revolution – our network revolution – is how the results of its diffused control and increased autonomy produce “innovation without permission.”

It should come as no surprise, therefore, that as the new digital networks of today reshape the legacy of earlier networks, they upend the comfortable consistency into which our society had settled.

It has been suggested that we are living through the greatest network revolution in history. On this the jury is still out. The reverse telescope of history makes prior experiences seem much smaller than they were. Each of the preceding changes enabled by print, transportation and electronic communication were destabilizing and redefining. We should expect nothing less today.

What is clear about our network revolution, however, is that the new information networks are the new economy. Whereas earlier networks enabled the economic activities of their eras, our network revolution defines virtually all aspects of the current economy. In the process, it places even greater importance on the role Congress has given the FCC to protect “the public interest, convenience, and necessity” of the nation’s networks. We are at a crossroads in the evolution of digital networks. The FCC must play the crucial role of facilitating more dynamic, world-leading change to ensure that the gains of the last several decades are dwarfed by the wonders of the years to come. At the same time, the Commission must also safeguard, nurture and project into the future the enduring civic values that networks have historically embodied.


Three Effects of Our New Networks

History has taught us that the power of the network has never been the network itself, but what those connections enable. It is the effects of networks that redefine economies and reshape individual lives. Network technology is on a self-imposed path of continual advancement and acceleration. How the public interest deals with those developments is similarly a work in progress. Only when that process plays out will we have the verdict as to whether ours is, in fact, the greatest network revolution.

We can be certain of three effects of our new networks. The first is the end of the tyranny of place. The second is the continual acceleration of the velocity at which information is transmitted and utilized. The third is a directional reversal: older networks, whose activity took place at a central point, acted as a centripetal force on those using them, while today’s networks, whose functions sit “at the edge,” act as a centrifugal force.


Effect #1: The End of the Tyranny of Place

From the earliest days when our ancestors painted on cave walls, the consumption of information required the user to come to it.[†] Until the 15th century, hard-copy information was a rarity, controlled by the priestly and the powerful. When Johann Gutenberg picked the lock that had kept information confined, the result was the original Information Revolution. The network of printers that sprang up in 15th- and 16th-century Europe fed the free flow of information and ideas that started us on the track to today.


Example of a printing press, circa 1520.[1]


While the printing revolution enabled widespread adoption of the Scientific Method’s use of hypothesis and debate as the core mechanism of intellectual advancement, its information distribution was limited by the reality that consumption of the material still required the user to come to it. Books were more plentiful and less expensive than ever, and their information was portable and persistent; but it was still a unidirectional process leading to a commanding interface point. Bound information may have been portable, but only in pieces. Collections of information remained a commanding presence, requiring the user to come to them.

Such a tyranny of place continued to characterize the flow of information for the next half millennium. During that period multiple new information delivery vehicles were developed, all of which continued to command the user to come to them in order to enjoy the benefits. “Go to” the book was followed by “go to” the telegraph office or the telephone, “go to” the radio or television, and even “go to” a network jack in the wall. While portable devices such as a transistor radio offered the ability to receive pre-selected information, they lacked the ability to command a broad spectrum (pun intended) of information of the users’ choice.

Today’s networks have turned the tyranny of place inside-out. Wireless distribution of digital information to hand-held computing devices represents the first time in history that the user commands the information he or she needs. Mobile information retrieval empowers the user to order the delivery of whatever information he or she wants to the place where it may be most productively consumed.


Ending the tyranny of place: Leland Stanford drove in the final golden spike on May 10, 1869 to join the rails of the United States’ first Transcontinental Railroad.[2]


Effect #2: Continually Increasing Speed at which Information is Transmitted and Utilized

Accompanying this reversal is the speed of the new networks. Until the 19th century the pace of life, including the speed of information, centered on the speed of man and beast. The speed and stamina of animal muscle meant that geographic distances controlled the human experience.

By overcoming the limitations of muscle power, the steam railroad crashed through pre-existing limits on human activity with ever-increasing speed and never-ending stamina. The railroad was the first high-speed network. By compressing the geographic distances that had previously isolated economic activity the railroad enabled the replacement of sub-scale production organized around the location of raw materials with the scope and scale economies of mass production. After the components of production had been inexpensively transported to a common site for fabrication at scale, they could then be redistributed to a set of newly connected markets. Whether a city was on the network was critical to this inflow/outflow and thus critical to its economic success.


A painting of the August 28, 1830 race between a horse-drawn railroad car and Tom Thumb, the locomotive. The horse won this race, but the locomotive proved its viability.[3]


Effect #3: Decentralization of Economic and Creative Activity

The modern map of our cities is a network effect reflecting the aggregation of masses of workers at network-created common points in order to mass produce products for a mass market. The effect of today’s network is to move in the opposite direction. Whereas the networks of history centralized economic activity, the new networks push such activity outward to enable small-scale yet interconnected and economically-efficient activity on a geographically dispersed basis.[‡] The networks of the 19th century destroyed individual artisans in favor of industrial production; the new networks are creating a new generation of digital artisans.

And that interconnection moves at the speed of light. When Samuel F. B. Morse tapped out “What hath God wrought” he began the third great network revolution: the separation of information from the physical delivery of its “package.” The early United States created an impressive postal service, but the information in a letter could travel no faster than the letter could be carried, by foot, boat, or horse. The railroad may have been the first high-speed network, but information still traveled in the physical package of a book, letter, or newspaper.


This painting depicts an ambitious (and ultimately unsuccessful) 1865-1867 project by Western Union to lay telegraph lines between San Francisco and Moscow.[4]


By separating information from its physical manifestation the telegraph not only removed transport time from the information equation, but it also established the concept of information as electronic signals, thus starting down the path that led to the telephone, radio and television. Ultimately, the off-and-on dots and dashes by which the telegraph conveyed information echo in the off-and-on zeroes and ones of today’s binary digital code.

Our revolution is based upon abstracting information into impulses, a concept that began with the telegraph.[§] Important new networks took advantage of this third network revolution; indeed, in a real sense, the FCC was originally created to oversee the third-generation networks of telephony, broadcast, and cable, until the Telecommunications Act of 1996 presciently looked forward to the next network revolution.

In other words, the printing press, railroad, and telegraph were the seminal technology-driven network revolutions of history. They established the groundwork that led to today’s fourth network revolution of computing devices that communicate at high-speed across a diverse collection of interconnected networks. The earlier networks also established the status quo that the new networks are now disassembling.


What History Teaches Us about Networks

John Gardner once observed, “History doesn’t look like history when you’re living through it.”[5] We know how the earlier networks changed the world. We are presently living a new network revolution that promises a similar impact on the history we leave behind.

For almost four decades I have been lucky enough to be enmeshed in the evolving interface between new networks and society. From the early days of cable television, to the digital revolution, and then cutting the cord to go wireless, I have been privileged to have a ringside seat as new networks redefined old ways. Now, as Chairman of the FCC, my colleagues and I have the responsibility of being the public’s representative to the ongoing network revolution. To us falls the job President Franklin Roosevelt described as being, “a Tribune of the people, putting its engineering, accounting, and legal resources into the breach for the purpose of getting the facts and doing justice to both consumers and investors….”[6]

The acceleration of information delivery, the end of the tyranny of place, and the dispersal of economic activity form a troika of network-driven change that rattles the foundation of our commerce and culture. Nevertheless, you can put me down as an optimist when it comes to the effect the current revolution will have on the commonweal.

But I am an optimist without illusions. History teaches that while new networks create great opportunities, it is only through torment and tumult that these opportunities become manifest. The economic dislocation, ideological confrontation and uncertainty that dog us today repeat similar experiences during previous periods of network change.

The printing press helped end the Dark Ages, sparked the Reformation, and spread the Renaissance. Today we look at the Renaissance as a golden era of intellectual and social advancement. To those living through it, however, it must have seemed far from golden. The dissolution of thousands of years of tradition and perceived truth that resulted from the printed free flow of ideas produced fear, uncertainty and conflict. In one 16th century cleric’s warning we hear the echoes of some of the gnashing worries raised about the changes being imposed today. “We must root out printing, or printing will root us out,” the Vicar of Croydon thundered.[7]

The railroad fed the Industrial Revolution that pulled people from independent, self-sufficient agrarian lifestyles into a melting pot of workers harnessed to power mass production. Spewing soot and sparks as it cut through previously pristine fields and pastures, pulling the younger generation from their ancestral roots, the steam locomotive recast the patterns of centuries. Again, the changes were not always welcome. “We do not ride on the railroad,” Henry David Thoreau complained, “it rides on us.”[8]

Alongside the railroad’s rights-of-way were strung the wires of the telegraph. Whereas the railroad compressed distance, the telegraph condensed time. The factor of time, which had always been a buffer to dull the sharp impact of change, became a casualty of the electronic network. The institutions of society, built around the immutable fact that because information moved physically it moved slowly, were hit by a seismic shift to speed.

Information speeding faster than the wind meant the heretofore imponderable of weather could now be forecast. News delivered from afar at lightning speed changed the political process, forcing nations out of regional isolation and into an interconnected whole. Electronic messages coordinated production activities, created a new managerial class, and enabled the rise of market-controlling corporations. And, as with earlier network innovations, there were dire predictions as to the result. Medical experts of the period warned that the “whirl of the railways and the pelting of telegrams” caused mental illness by placing an unnatural burden on the human body.[9]

Understanding the historical reactions to network change, we should not be surprised by a contemporary headline in USA Today, “Tech Tyranny Provokes Revolt.” In apocalyptic tones the article reported, “Technology was supposed to free us and make our lives easier, but it’s done the opposite. It’s creating havoc in our lives. Everyone is overwhelmed and stressed out.”[10]

Opportunities Provided by New Networks

We should not assume that the changes imposed by our new networks will be any less tumultuous than those of their predecessors. At the same time, however, the new networks provide an opportunity to improve on the legacies left by those earlier networks.

Our health care system, for instance, began as the railroad brought masses of workers to a central point for mass production. Public health services such as adequate supplies of safe drinking water and institutionalized sanitation services had not been a priority in smaller towns but became a big-city necessity. The small town doc (if one was available) was a medical artisan and jack of all trades who dealt with everything from a broken foot to a cracked cranium on a sub-scale one-off basis. The tide of urbanization, however, suddenly brought scale to sickness. The solution was to apply the principles that had worked for mass production. Public hospitals became factories for the sick where centralized services permitted specialty practices to be applied at scale.

Today, health care has never been better – or more costly. As medical success permits people to live longer it also expands the opportunities for health problems. And the most expensive way of treating those problems is in the hospital. The new networks create the opportunity to transform medical treatment from an ex post experience dealing with a presented problem, to an ex ante experience that anticipates the problem and prevents or mitigates it – all at significantly lower cost. They offer, in other words, a new combination: the bigness of scale economics with the personalization of individual design. The power of mass production meets the individual artisan.

Sixty percent of heart failure patients, for instance, are readmitted to the hospital within six to nine months of their initial discharge.[11] The factory approach to medicine prescribes that we wait for an occurrence of the problem, then institutionalize the patients (at great cost) until they are well enough to be discharged. Because of the connectivity of our new networks, it is now possible to get in front of the problem.

My doctor is fond of talking about how medicine is an observational activity, about how the onset of medical problems can be predicted by the observation of statistically significant data inputs. Because our new networks are all about the collection and use of data inputs, they can be married with the informational nature of medicine to change the health care paradigm (even as we safeguard patient privacy).

Rather than waiting for a recurrence of problems to re-hospitalize a cardiac patient, a wearable wireless device can track key indicators, constantly reporting the situation to a medical professional, to predict and preempt problems. I was recently in a meeting of about a dozen people where, unbeknownst to each other, two were wearing such devices. One person’s device connected through her mobile phone, while the other’s connected through a wristwatch that then linked to the network. These individuals were able to go about their daily affairs, carrying with them the opportunity of earlier detection that allows for earlier treatment, better outcomes, and lower costs.


Examples of wearable tech used to track health factors or to communicate.[12] | [13]


The mid-19th century factory-like approach also shaped the manner in which we educate our young. Production line techniques were applied to learning. Education became a process of inputting the raw material and moving it through various processes until, 12 years later, it emerged as a finished product. The pedagogy of such mass production became a lecture followed by isolated individual homework in which the student tried to apply the concepts of the lecture.

The new networks allow for the old pedagogical approach to be stood on its head. The traditional model used the teacher’s time to uniformly broadcast a uniform lesson to a decidedly non-uniform audience; then the student would struggle alone to apply the lesson to homework. The new networks enable another approach: the student watches the common lecture on a connected device, alone and at his or her own pace, stopping as needed to repeat anything that wasn’t clearly understood. Then the student comes to class, where the teacher can personalize instruction based upon the student’s comprehension of the lesson and where the irreplaceable stimulation of collegial discussion can be hosted.

New networks, of course, allow this new education paradigm to operate by delivering lessons to the students’ connected device wherever the student may be and at whatever pace may be appropriate. The process also allows teachers to monitor the students’ activity so as to be able to intervene as necessary. Studies by Carnegie Mellon University’s Open Learning Initiative have shown that such programs blending online learning with in-person instruction can dramatically reduce the time required to learn a subject while greatly increasing course completion rates.[14]

The new networks also allow for a richer in-school experience. The ability of a student in class to review a lesson on his or her tablet, or bring up a video demonstration of a topic being discussed is changing the classroom we knew. My colleague, FCC Commissioner Rosenworcel, tells the story of a school she visited in Florida where, “Students have fully traded in chalkboards and textbooks for video screens and laptops…a program that blends online learning with in-person instruction.”[15]

This new educational opportunity, of course, depends on access to the new networks’ capabilities. If a student cannot get access to the Internet at home the new model falls apart. When a newspaper headline reads “The Web-Deprived Study at McDonald’s” because students cannot afford the Internet at home and the public library is closed, but the burger joint has free Web access, something is wrong.[16]

Similarly, if students do not have access to a high-speed Internet connection at school, their learning experience is further constrained. It should be a concern to all of us that a survey of public school teachers and administrators found that 80 percent of schools participating in the FCC’s eRate program reported bandwidth below the level necessary to meet their educational needs.[17]

Health care and education are but two examples of how our new networks can be put to work to solve the legacy issues of the previous networks. The challenge of energy creation and consumption along with the accompanying environmental impacts, for instance, can also be confronted with the application of data network functionality. Using telecommunications networks to increase efficiency of the power network can “build” virtual power plants that create energy through network-controlled demand management efficiencies.

Economically, networks have always been growth engines. Our new networks are no exception. Sixty-two percent of American workers rely on the Internet to perform their jobs.[18] For most of the last decade I have been engaged in the development of new businesses with one thing in common – the harnessing of the Internet. In the process I have watched an amazing transformation take place.

In the world in which I grew up innovation and the job creation that resulted was the province of corporate development centers such as Bell Laboratories. Today the former headquarters of Bell Labs stands deserted. The innovations it pioneered have enabled the work it accomplished to be decentralized across the landscape, creating jobs, investment, and innovations on a distributed template. Never has it been easier and less expensive to develop technologically-based innovations than by exploiting high-speed connectivity and network-based cloud computing.

The opportunities presented by the new networks to attack challenges left behind by previous network revolutions are almost limitless. Our opportunity is to focus not only on the building of networks, but also on how those networks will be applied to meet our national challenges.


Resistance to Network Change

As we go about this task, the lessons of history are again informative. One such lesson is the blow-back that confronts the opportunities presented by network change. The economic incumbents threatened by the change often opposed its innovations. The other lesson is that insurgents eventually become incumbents and behave accordingly.

The printing revolution’s introduction of open inquiry was a threat to the Establishment of the time. Governments and the Catholic Church both tried to shut down or curtail the new technology that was upsetting the established order. Pre-printing authorization and censorship were imposed. But the revolution continued. Yet even two centuries after Gutenberg’s great breakthrough, the Establishment was still fighting back. Books, it was warned, “will make the following centuries fall into a state as barbarous as that of the centuries that followed the fall of the Roman Empire.”[19]

The iron horse’s ability to span great distances at high speed threatened the livelihood of those whose business was based on slower realities. As one historian noted, “Every ploy known to shrewd local lawyers was used to keep things nice and cozy for local carting companies, freight forwarders, hack drivers, hotel and restaurant owners, local wholesale merchants, and anyone else” for whom the railroad represented a change from the status quo.[20] When legal means failed, vigilantes tore up at night the track that had been laid during the day. Legislatures passed laws restricting the ability of the new network to compete with the old.

Hanging on my office wall at the FCC is an 1839 poster printed by those opposed to the interconnection of two rail lines. The sign says nothing about its sponsors or their desire to protect their businesses of hauling people and freight between the disconnected lines or selling food and sundries to those in transit. Instead, the connection was portrayed as a dire threat to public safety – especially women and children. “MOTHERS LOOK OUT FOR YOUR CHILDREN” the poster blares accompanied by an image of ladies scurrying to safety to avoid a rampaging engine.[21]


1839 poster opposing interconnection of Philadelphia rail lines. [22]


From Insurgent to Incumbent

The history of the railroad network also illustrates what happens when the insurgent becomes the market-dominating incumbent. Because small agricultural communities rarely had more than one rail line, for instance, that company was able to extract what economists call “monopoly rents.” Rates were higher than would have been charged in a competitive market. The rates charged small town farmers to move their produce a short distance to a trading center, for instance, were often twice the rate charged for a longer run on a competitive line.

In 1887 pressure from these farmers resulted in the creation of the Interstate Commerce Commission (ICC). The mandate of the ICC was to apply offsetting government power against the power of the railroads so as to assure the protection of the network’s users. It was the first independent Federal regulatory agency and the template for all that was to follow.[23]

The evolution from insurgent to incumbent was also the path followed by the telephone network. As the Bell Telephone Company tried to build upon the technology developed by its namesake, the mighty Western Union Telegraph Company (which had also gone into the telephone business) exploited its market position to block Bell. By the end of 1878 Western Union had almost six times the number of telephone subscribers as did Bell.


This 1891 map sketches out the initial lines making up AT&T’s early network. The cover of the map states the following: “500 miles and return in 5 minutes. The mail is quick; telegraph is quicker; but Long Distance Telephone is Instantaneous and you don’t have to wait for an answer.”[24]


At the Bell battlements fighting the ever-expanding telegraph/telephone colossus stood Theodore Vail. Railing against the larger company’s market power, Vail was the classic insurgent. Eventually, and amazingly, however, Jonah swallowed the whale by buying Western Union’s telephone assets. When Vail’s market position changed, so did his approach.[25]

As president of AT&T, Vail imposed policies he had previously fought. “Two exchange systems in the same community…cannot be conceived of as a permanency,” he wrote in the 1907 annual report. “Duplication of charges is a waste to the user.” It was the concept of a “natural monopoly”: that for such a capital-intensive business the only efficient solution was a single provider. To further this vision Vail began to buy up independent telephone companies across the United States. He leveraged AT&T’s market power to assist his expansion. If a company resisted selling, it would suddenly discover difficulty interconnecting with AT&T’s long distance lines.

In a 1913 agreement with the Federal government, Theodore Vail codified the natural monopoly concept in return for, among other things, a requirement that other telephone companies must be allowed to interconnect with AT&T’s long distance network. It was the beginning of the regulated monopoly that would go on to define telecommunications service for most of the 20th century.


The Evolving Regulatory Model for Networks

The network revolution through which we are living has produced a marketplace far different from that which we knew in the 20th century. As we live the history of these changes we are also living the evolution of the regulatory model that developed around the realities of the 20th century. There are some who suggest that the new technology should free the new networks from regulation. While the elimination of circuit-switched monopoly markets certainly obviates the need for the old monopoly-based regulation of that technology, one can also argue that the new networks are even more important to society than were the old ones and that the public has the right to be represented in the change equation.

How we deal with these issues has never been more important because of one other network effect. The importance of the basic network has always come from how it enables other networks to exist. The railroad, for instance, enabled networks for the delivery of parcels and mail order retail as well as the refrigerated delivery of food that substantially reduced prices and put meat on tables. The telegraph enabled the establishment of news networks and financial networks. These were substantial network effects; but today’s networks are even more critical in their effects.

Information delivery networks are the new economy. Our networks have never been more integral to our well-being. The industrial economy has been replaced by the information economy that is predicated on the operation of information networks. Economic growth, attacking the legacy problems of the old networks, and building on the new opportunities of high-speed data are all dependent upon the core telecommunications networks. From health care, to education, to the new apps on your mobile device, the growth networks of our economy rely on the performance of core information networks.


A visualization of the early 2005 internet. Note that green lines represent .com and .org and blue lines represent .net, .ca, and .us.[26]


Three Pillars to Communications Policy

The result of such network reliance is what makes the work of the FCC so interesting and important. Like the rest of society, the Commission must deal with legacy issues as well as plan for the future. The FCC’s role must be inextricably tied to the dual responsibilities of facilitating the dynamic technological change that will persevere long after we are gone, and to protecting and extending forward in time the enduring civic values that successful networks have historically embodied.

There are three pillars to communications policy that guide that process: promoting growth, preserving the fundamental arrangements that I call the “Network Compact,” and safeguarding the broader values historically associated with communications policy.


Pillar #1: Promoting Economic Growth and National Leadership

The first is that policy should promote economic growth and national leadership. A seminal legacy issue is that our current networks each grew up in different environments. Those different histories affect how these networks plug into the new future, including the role of government. The wired networks – whether telephone companies or cable companies – grew out of 20th century monopolies. Typically, there was only one telephone company and one cable company in a town. Contrast that with how from the start there have been multiple wireless networks. The Communications Act’s policy goal for all networks is to ensure reasonably priced, world-class services for all Americans. Because competitive markets are more nimble than the regulatory process, the goal should be to ask how competition can best serve the public – and what, if any, action (including governmental action) is needed to preserve the future of network competition in wired networks or wireless networks.

In a competitive market the speed, price, capacity, quality and choice of network services should show constant improvement. Policies that encourage new investment, competitive offerings and protect markets from unwarranted consolidation also increase the “home field advantage” for American companies. One of America’s historical advantages in the world economy has been a large internal market. Such an advantage was the bedrock of American leadership in the industrial era and must not be lost in the information era. The quality and scale of our country’s telecom networks gave us the pole position in the information economy. Maintaining that competitive advantage is a national priority. The Internet began in the U.S. because government encouraged it and our networks permitted it; squandering that advantage would be a national calamity.


Pillar #2: Guaranteeing the Network Compact

Beyond such structural issues is the basic relationship between networks and those they serve. The technology that drives the new networks may have changed their design and operation, but the essential components of the relationship between the network and its users has not changed.

The second policy pillar, therefore, is the Network Compact between those who provide the pathways and those who use them. This civil bond between networks and users has always had three components: access, interconnection, and the encouragement and enablement of the public-purpose benefits of our networks (including public safety and national security).

The inability to access a network is like the proverbial tree falling in the forest. If you can’t access networks, they might as well not exist. There are several manifestations of network access, all of which are topics of the Communications Act. One component of access is universal service. If high-speed Internet connections have not been built to an area or are denied to individuals because of either their individual economic realities or the practices of the provider, then access has been effectively denied. For example, rural communities that don’t have access to our new networks cannot fully participate in our economy and our culture. Similarly, if someone has a desire to use the network but is thwarted by unreasonable network practices, then access has been denied. And if when using a network basic consumer rights and the rights of people with disabilities are violated, then the right of access has also been violated.

Interconnection is, of course, tied linguistically to the “Internet”, which is short for the original name “internetworking.” The Internet is the stitching together of often disparate networks through the use of a common protocol (TCP/IP). The Internet is not a network, per se, but a collection of networks harnessed to a common purpose. As such, the value of the Internet has always been its “Inter” – as in the interconnection and interoperability of these disparate networks. As a collection of networks over which information packets travel in seemingly random paths, the Internet is not like the telephone network’s dedicated circuits. Twentieth century telephone regulation focused on dealing with the effects of the switched circuit monopoly. Telecommunications oversight today should focus on encouraging and protecting the unique capabilities of the components of the Internet. The telephone network created an identifiable, singular, end-to-end path. The Internet is far different; it is a collection, not a thing. As such the interconnection of the parts of the collective we call the Internet is a sine qua non.

We must be clear. “Regulating the Internet” is a non-starter. What the Internet does is an activity in which policy makers should not be involved (other than assuring overriding purposes such as the ability to complete 911 calls or the ban of child pornography). Regulating Internet access is a different matter. Assuring the Internet exists as a collection of open, interconnected facilities is an appropriate activity for the People’s representatives.

The final component of the Network Compact is the responsibility to protect national security and public safety. The packet-switching technology of the Internet was developed to enhance U.S. national security by making it difficult for an enemy to destroy the ability of the United States to order a retaliatory strike.[**] Today, the technology designed to enhance national security has become the pathway by which those who would do us ill – ranging from criminal gangs to state actors – can access the very essence of our economic, individual, and military well-being. Our networks must be secure. At the same time, our networks must continue to be the safety backbone during individual or mass emergencies. The ability to summon emergency help, to coordinate emergency response, and to do so via a network that is as secure as possible from cyber-attacks must be unquestionable.


Pillar #3: Enabling the Public Purpose Benefits of Networks

The third policy pillar is the encouragement and enablement of the public purpose benefits of our networks. Broadband for the sake of broadband is an empty goal. As we have seen, the importance of networks is not the technology itself, but what the technology enables. Out of the first two policy pillars comes an accessible, usable network. The third pillar’s purpose is to apply that for the delivery of public benefits. Included in such a list of public benefits must be the provision of the tools necessary for a 21st century education, access to the benefits of the new networks by individuals with disabilities, and the maintenance of diversity, localism, and free speech. As history has taught, the importance of networks is not what they are, but what they enable.

The Communications Act is quite specific that the role of the FCC is to protect “the public interest, convenience and necessity.” This mandate traces to the creation of the Federal Radio Commission in 1927, was incorporated into the Communications Act of 1934, and was reaffirmed by Congress in 1996. For almost 90 years this instruction has been the alpha and omega of the government’s responsibility and authority. As technologies have changed and markets have evolved it has remained inviolate. The challenge of the FCC today is the delivery on this mandate – which continues in law and must continue in practice – in a manner that supports this third pillar, enabling the public purpose benefits of our networks.


Eugene Octave Sykes (seated bottom left) served as the first Chairman of the FCC from 1934-1935.[27]


The Relationship Between Competition and Regulation

Let’s begin with the fact that a dynamic market deserves dynamic decision making. It took the FCC 15 years after its 1968 decision to dust off a 10-year-old petition for mobile telephony spectrum before the first cellular service was implemented in Chicago. During that time Americans watched as countries around the world rolled out mobile services to their citizens. Not only was the process slow, but the Commission also had a “we know best” attitude, exemplified by Commissioner (later Chairman) Robert E. Lee’s warning that people calling each other on cellphones would be “frivolously using spectrum.”[28]

However slow and debilitating the FCC’s decision making on cellular service was, it was not just the harbinger of the untethered era, but also of the competitive era. When it finally got around to allocating spectrum, the government got it right; there would be two competitors in each geographic area – and more over time. In an era when there was The Telephone Company, the FCC broke with precedent and created a competitive wireless marketplace. Thirty years later, the regulatory mission of the FCC continues to be informed by those watershed precedents: delay is the enemy of innovation, and competition is the lifeblood (or sine qua non) of growth and innovation.

These lessons manifest themselves in a few regulatory concepts.

There should be an inverse relationship between competition and government action. The more there is of the former, the less there need be of the latter. The old monopoly model began with the assumption that telecommunications was a “natural monopoly” sanctioned by the government and overseen in great detail by that government. When there is effective competition there is less need for the government to substitute for it.

Viable competition among networks is essential, and the networks must remain competitive. Competition must be encouraged, facilitated, and where present protected. The response to those who complain about “regulatory burden” is the embrace of effective competition.

Yet workable competition sometimes is not attainable, and even where it is theoretically possible, it is not the most natural of economic acts in the marketplace. Capital markets and investors know that higher profits are more easily found in the absence of the price discipline imposed by rigorous competition. Competition is appropriately celebrated for its benefits, yet economic forces naturally connive to limit it. Over the years I have repeatedly heard business leaders comment, “We welcome competition.” They are sincere in these statements, but there is a difference between celebrating the concept of competition and aggressively seeking its implementation. The real-world business environment inherently attracts anti-competitive antibodies seeking to immunize markets from its effects.

The role of the FCC is to both protect and stimulate competition in order to provide consumers access to world class networks on reasonable terms. If the goal of the providers of telecommunications services is to avoid regulation, then the path to that end is clear: effective competition in the present and an effective path to competition in the future. Where markets fail or are threatened, the FCC has the responsibility to provide redress.


The Role of Multi-Stakeholder Processes

In a world of fast-paced technological innovation there is also a legitimate reason to investigate whether the process that facilitates such rapid innovation can be applied to the process of government. One of these models is multi-stakeholder coordination, which brings together all the affected players to develop a common solution and then enforce its implementation. Another is a simple precept: successful businesses learn continuously; so must government. The multi-stakeholder approach, with its ethic of inclusivity, has many attractive features and potential benefits, but those benefits must be produced efficiently – that is, quickly – in the context of the rapid change in which we find ourselves.

Where the multi-stakeholder model serves the “public interest, convenience and necessity,” its speed, learning, and flexibility should be rewarded as a valuable addition to the Commission’s tools. The FCC can identify specific goals and work with the stakeholders to develop a meaningful process. This should not be confused, however, with a “Regulation Free Zone.” The FCC can identify the issues within the agency’s responsibility that lend themselves to multi-stakeholder solutions, provide the convening function, and coordinate the process. Most importantly, however, the result must be more than words on paper. “If you will, we won’t” is a good regulatory philosophy for the era of fast-paced technological innovation. However, it only works when accompanied by serious oversight and an iron-clad corollary: “But if you don’t, we shall.”

The FCC recently had an experience with just how easily voluntary standards can be ignored. The Network Compact’s promise of providing for the public’s safety was violated when the self-regulation associated with the provision of backup 911 services was ignored by some companies, only to be exposed during an emergency. A wise man once taught me to “inspect what you expect.” The regulatory agency should encourage multi-stakeholder solutions to network responsibilities, accompanied by strong oversight to assure delivery on the promises and a rapid regulatory response if the promises are not fulfilled.



The Need for Expeditious, Fact-Based Policymaking

A similar demand for dispatch should apply to the agency’s regulatory activities. The regulatory processes of the FCC have been criticized by some as being too opaque and cumbersome. At the same time, however, this is the agency that moved expeditiously after being given spectrum auction authority in 1993 and with similar dispatch to meet all the deadlines in the implementation of the 1996 Telecommunications Act. Investigating how the agency can operate quickly and smoothly under the procedural requirements of the Administrative Procedure Act (APA) should be a priority.

One key component of the FCC’s administrative process is to focus like a laser on a fact-based, data-driven process. The goal of the agency’s rulemakings should be to begin with a rebuttable presumption and invite submission for the record of data that either supports or refutes the proposition. It is a simple yet powerful concept that should be the FCC’s North Star: facts evidenced by supporting data.


* * *


The network revolutions of history have led us to our moment of history, a hinge moment when the definitive activity of how we connect is being redesigned. In the coming months we will lay out in greater granularity how this history and these principles apply to the responsibilities of the FCC. The history of other such moments when networks redefined the human experience teaches us that while such periods are full of tumult, they are even more full of opportunity. To our generation has been passed the privilege of participating in an historic moment. To those of us charged with being the public’s representative to the revolution falls the responsibility to maintain incentives to expand and garner the value of the present and future electronic networks while protecting the enduring values networks provide the people they serve.

it’s not machines…. it’s a spiritual war… global…. “who owns us?”

any society bent on using tech against “others” is not loving… therefore not of God?

Psalm 121

A song of ascents.

I lift up my eyes to the mountains—
    where does my help come from?
My help comes from the Lord,
    the Maker of heaven and earth.

He will not let your foot slip—
    he who watches over you will not slumber;
indeed, he who watches over Israel
    will neither slumber nor sleep.

The Lord watches over you—
    the Lord is your shade at your right hand;
the sun will not harm you by day,
    nor the moon by night.

The Lord will keep you from all harm—
    he will watch over your life;
the Lord will watch over your coming and going
    both now and forevermore.
