Fatal Filtering: Software Deterrents to Expression on the Web

 

 

by

 

 

John E. Bowes

School of Communications

Box 353740

University of Washington

Seattle, WA 98195-3740

 

jbowes@u.washington.edu

 

 

Draft Copy

 

 

 

Prepared for the Communication Technology and Policy Division

AEJMC Midyear Convention, Denver, CO

25-26 February 2000.

 

 

 


In the several years since the US Supreme Court struck down the Communications Decency Act of 1996,[1] public and governmental attention has shifted to imposing Internet content controls at the consumer side of transmission – in preference to the First Amendment problems that arise from controls upon content producers. Since the rejection of this law and its successor, the Child Online Protection Act,[2] other changes in industry economics and policy have raised the stakes for finding agile, constitutionally defensible ways of protecting the public from unseemly Internet sites. Most efforts have settled on a new class of software, filter agents, which intercept unwanted content "on the fly" and keep it from being viewed.

 

Broadly, this paper examines the accuracy of these agents in (a) filtering only objectionable content; (b) offering clear policy statements about the action of filters, including how they work and the lists of blocking terms or websites used; and (c) imposing editorial and citizen review to mitigate "blind" censoring by automatic keyword filtering. The net impact of inaccuracy in these programs is to exclude from public view certain minority groups (gays and lesbians), charitable or informational groups (breast/testicular cancer assistance), and free web site customers (anyone unable to afford commercial, institutional or personal web servers). Further, filter agents may restrict access to comment on sensitive issues and groups, regardless of source. Opinions on pornography, pedophilia, AIDS, safe sex, birth control, Nazis, anarchy, prayer in schools – to name a few – have been banned through clumsy association with sites offering obscenity, hate, sexually explicit materials and extremist religious or political views.

 

In reaction, a number of groups, ranging from the American Civil Liberties Union to the American Library Association and the Gay & Lesbian Alliance Against Defamation (GLAAD), have mounted a vigorous campaign to fight mandatory filter use in libraries and schools, and to force manufacturers of this software to improve the accuracy and flexibility of their products. With equal vigor, social conservatives and organizations like the Family Research Council have fought back, advocating not only home use of filter agents but their mandatory use in libraries, schools and "upstream" in the delivery servers of Internet service providers (ISPs). For both poles in this debate, the stakes are high: access to minority opinion and sensitive topics for one side, and protection of children and social order for the other.

 

This study examines the approximately 50 filter agents presently available on the US market for the clarity of their mechanisms to consumers, the care exercised in making revisions (as when wrongfully blocked sites are made known), and the visibility of filtering to the end user, who should know when filtering or monitoring is in place. We examine technical information and public product descriptions for clues to each product's operation – the sort of scrutiny careful consumers should apply before purchasing this software or supporting its use in community institutions. Based on these findings, we make recommendations for improving the evaluation of filters and for features that minimize the blocking of socially useful minority content. Later, we discuss the methodologies needed for systematic, valid comparison of agents in actual field-test situations.

 

The Recent Evolution of Filter Agents: In its 1998 report, Censorship in a Box,[3] the ACLU suggests that use of filtering software has experienced "explosive growth" since the defeat of the Communications Decency Act of 1996 in Reno v. ACLU. The report notes that some $14 million in filtering software was sold in 1997, with projections that "blocking" software sales would grow to $75 million within three years – by the year 2000. The American Library Association reports an Internet access rate of 60% in US libraries by 1999, up from 28% in 1996. The universal access mandates of the Telecommunications Act of 1996 help assure that this figure will climb to near totality over the next few years. The constitutional problems apparent in "supply side" control of Internet content, as seen with the Child Online Protection Act (COPA), leave filtering as the main "protective" product available to schools, libraries and parents. Faced with escalating statistics on "cyberslacking"[4] – workplace misuse of corporate Internet connections – businesses have joined worried parents as a profitable market.

 

Growth and Use: Paralleling the growth of filtering products is their increasing availability on the servers of Internet access providers.[5] This "upstream" filtering requires no user implementation; offending websites never get past the filter placed on the network server. One filter program, CyberPatrol, has served America Online, CompuServe, Prodigy, AT&T, Bell-AtlanticNet, and Scholastic Net, among others – over 24 million subscribers. A half decade ago, the variety of Internet providers might have offered more choice, ranging from no censoring to heavy censoring; providers were local and relatively small, reflecting regional norms. But since the mid-1990s there has been a rapid consolidation of ISPs. Local "mom & pop" providers are increasingly bought up or competitively flanked by national-scale organizations.[6] Some 160 Internet service providers were consolidated in 1998, with another 70 meeting a similar fate in the first three months of 1999.[7] America Online has attained the scale of a large international telecommunications organization, with over 19 million subscribers. Portal page providers like Yahoo have become billion-dollar organizations with customers across a half dozen nations.

 

As these newly large corporations face anxious politicians and a tangle of differing local and national criteria for acceptable content, they veer toward the safest and most conservative standards. They have a lot to lose from protracted legal problems with government and policy problems with frightened, angry parents. In one extreme case, following a complaint of Internet pornography transmission laid by the Bavarian state prosecutor in 1995, CompuServe was obliged to disconnect customers across Europe from over 200 Usenet newsgroups. Felix Somm, a CompuServe manager, was sentenced to a heavy fine and a two-year suspended prison term because he had "abused" the Internet and "allowed" child pornography and Nazi literature – both of which are illegal in Germany – to be available to German CompuServe users.[8] It made little sense to the court that Somm had no real ability to monitor or control the flood of international items coursing through the Internet.

 

It is cost-efficient for AOL, CompuServe and others to hand over "vigilance" by buying a popular commercial filter and giving their subscribers the choice to turn it on. The problem is that software selection may be geared to minimal price, low supervisory overhead and a common-denominator blandness. In short, as size, consolidation and corporate standardization of software increase, filtering has grown in impact. In contrast to the landscape of thousands of small Internet providers several years ago, each with a particular local outlook and varied policies, consolidation narrows the choices.

 

These trends are far from abstract: they could mean a progressive denial of access to socially useful information that is considered "controversial" or that presses the margins of "risk control" for litigation-shy Internet providers. Over-cautious filter software rarely excites hostility from mainstream consumers and wary government officials. The losses of useful content go mainly unseen unless specifically tested for (see below), while "bad" sites slipping through filter agents can excite considerable protest. Corporate risk management is likely to win out over more abstract notions of "fairness" to marginal interests whose omission can be difficult to detect.

 

Filtering Legislation as a First Amendment Work-around: Since the failure of the Communications Decency Act, there is ample evidence that governments increasingly see filtering as their quickest fix for calming those worried about Internet content. The negative outcomes for outright censorship in Reno v. ACLU, and later in a second round over the Child Online Protection Act, have increased reliance on filters, most recently seen in the Internet School Filtering Act (S. 1619) proposed by Sen. John McCain. As if Federal efforts weren't enough, the ACLU reported in 1998 that ten Internet censorship bills had been proposed in state legislatures, five of them specifically requiring filter or blocking software in schools and libraries.[9] By recommending filtering software, legislators transfer the main gate-keeping task from provider and regulator to parents, schools and libraries. More importantly, in the hands of vulnerable local institutions, filters become an easy pressure point for special interests, local and state politicians and any parent at odds with the Internet.

 

Lower courts are beginning to take a critical view of filtering by civic institutions. The recent Mainstream Loudoun v. Loudoun County Library decision held that filtering all of a library's machines was equivalent to "removing books from the shelves."[10] The decision, handed down in late 1998, found a number of problems with indiscriminate use of filtering software: adults were denied access to constitutionally protected material because it was unfit for children; standards were not spelled out; and there was a lack of safeguards or process for judicial review of policy. This decision, however, has not slowed legislative efforts to mandate filtering software. As Karen Schneider, organizer of the pioneering Internet Filter Assessment Project, reports, over 15% of libraries now filter.[11]

 

Apart from national-level freedom of speech interest groups, international associations concerned with global Internet policy have entered this controversy. Prominent among these is the Global Internet Liberty Campaign, an international union of human rights organizations favoring unfettered access to electronic information. Their member statement to the Internet Content Summit in Munich in September 1999 is worth repeating:

 

“The creation of an international rating and filtering system for Internet content has been proposed as an alternative to national legislation regulating online speech. Contrary to their original intent, such systems may actually facilitate governmental restrictions on Internet expression. Additionally, rating and filtering schemes may prevent individuals from discussing controversial or unpopular topics, impose burdensome compliance costs on speakers, distort the fundamental cultural diversity of the Internet, enable invisible "upstream" filtering, and eventually create a homogenized Internet dominated by large commercial interests. In order to avoid the undesirable effects of legal and technical solutions that seek to block the free flow of information, alternative educational approaches should be emphasized as less restrictive means of ensuring beneficial uses of the Internet.”[12]

 

It is important to note that fears about filtering extend well beyond the exclusion of minorities. With wide, uncritical adoption of filtering, there is fear that the Internet will become bland, branded and standardized.

 

Faulty Filters: A major complaint by critics is that filters have unintended consequences. While concerned minorities have prevailed on software manufacturers to fix glaring problems, a variety of forces can make the best-intentioned efforts go wrong. Karen Schneider describes in grim detail just how bad this software can be for gay- and lesbian-oriented information. As she aptly concludes, " . . . It isn't realistic to expect that a $40 piece of software could effectively manage something as huge as the Internet – and it's equally unrealistic to expect a piece of software to instantly customize itself to the values of each person who uses it."[13] Evidence for her conclusions abounds.

 

In Passing Porn, Banning the Bible, The Censorware Project tested one popular blocking product, N2H2's Bess. As the title suggests, this popular filter software had many unintended consequences, despite the firm's assurances to Congress that human editors reviewed all banned sites. Web sites advocating celibacy, offering information on cats, ghosts or Serbia, or criticizing Internet censorship were all caught in Bess's filters. All free web page sites were banned as a matter of policy; these servers, where individuals may post their own web sites without cost, are among the largest sources of Internet content. On the other hand, many obviously pornographic sites slipped by Bess, such as "stripshowlive.com," "legal-sex.com" and "digitaldesires.com."[14] It should satisfy no one that safe and risqué topics are mishandled by filtering in apparently equal measure.

 

These errors clearly undercut the idea that presently available software can adroitly filter the 20-plus terabytes of data that make up the Internet. A 1996 study estimated the Internet at 1.5 terabytes (then) and calculated that more than a third of web pages change per month.[15] One current estimate is that new pages are now added to the web at a rate exceeding 25 pages per second. Filters are swiftly blindsided by changes in keyword usage, semantics and the onrush of new web pages. They demand continual maintenance by editors who can quickly see the categorical differences between, say, Asian wildlife and "hotasianfoxes.com."
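
To put the editorial burden in perspective, a back-of-the-envelope calculation using the figure cited above of 25 new pages per second (a rough contemporary estimate, not a precise measurement) suggests how quickly any hand-reviewed list falls behind:

```python
# Rough scale of the review problem, assuming ~25 new web pages per second
# (the estimate cited above; actual growth rates vary widely).
PAGES_PER_SECOND = 25

pages_per_day = PAGES_PER_SECOND * 60 * 60 * 24
pages_per_month = pages_per_day * 30

print(f"New pages per day:   {pages_per_day:,}")    # ~2,160,000
print(f"New pages per month: {pages_per_month:,}")  # ~64,800,000
```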

 

Will institutions pay the costs of continual maintenance? And will parents who have installed blocking software on home machines do likewise? Can any software organization maintain massive review and vigilance for the sake of a $40 product? Data can't be found on these questions, but we suspect that for many – individuals and organizations alike – the task falls into the class of "to do" computer chores that never quite get done. It is simply too time-consuming and troublesome. Editors must also anticipate inventive countermeasures from those most likely to be screened out, such as deliberately loading sites with tame keywords to frustrate filters. A site like wwwmen.com, despite the innocuous name, is about gay pornography, not men's health or fashion.

 

Most filters and blocking software are proprietary, their precise functioning and keyword lists of censored terms kept secret. One must test filters with benchmark programs to learn, for example, that Searchopolis, the engine underlying Bess, returns the same website list for "testicular cancer" as it does for "cancer."[16] Only through experience does one discover that "testicular" is banned, having no influence on the search. While some filter software firms are responsive to citizen criticism and have advisory boards to adjudicate tough categorical issues (such as OK-for-kids gay websites), very few reveal the details of their products' operations.
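
One way consumers can probe such black boxes is differential testing: issue two searches that differ by a single word and compare the results. If adding "testicular" to "cancer" changes nothing, the added term is almost certainly being stripped or ignored. The sketch below is a minimal illustration of that comparison logic; the result lists are hypothetical stand-ins, since the behavior of any real engine must be observed directly.

```python
def term_appears_filtered(results_base, results_with_term):
    """Heuristic check: if appending a term to a query returns exactly the
    same result list, the term is likely being dropped by the filter."""
    return list(results_base) == list(results_with_term)

# Hypothetical result lists standing in for real search output.
results_for_cancer = ["cancer-society.example", "oncology-faq.example"]
results_for_testicular_cancer = ["cancer-society.example", "oncology-faq.example"]

if term_appears_filtered(results_for_cancer, results_for_testicular_cancer):
    print("'testicular' appears to have no effect -- likely a proscribed term.")
else:
    print("The added term changed the results -- probably not filtered.")
```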

 

A growing number of studies have tested filters, but few with anything approaching a comprehensive benchmark representative of the vast range of material making up the Internet. It is estimated that the best commercial search engines rarely cover more than about 20% of the web's contents. The analytic overhead not only of surveying web content representatively but of gauging its use accurately is the focus of intense commercial competition and debate. In many respects, the magnitude of this task is like that of a national census – expensive to do, open to continual political controversy and likely to be biased, however keenly safeguards are employed.

 

Open Platform and Search Engine Accuracy: Because software engineering is among the most competitive of enterprises, few inner workings of filters are made public. To find products to examine, we consulted several current "master" lists of products: Anne Bubnic's list of Parent Control Resources,[17] GetNetWise's "Tools for Families,"[18] and AT&T's key sources for parents.[19] Promotional materials and technical data from the makers' websites were examined for key characteristics of how each filter agent worked and for the extent of human intervention used to make sure socially useful sites were not unintentionally excluded. We were especially concerned with several hallmarks of filter accuracy:

1.      Method and Visibility of Control: How a filter agent works is important to its flexibility and subtlety with new content, and with content that may be about a banned topic such as pornography without being pornography itself. Proxy servers, for example, commonly pool and route all machines in a library or school to one high-capacity Internet connection. These connections can be programmed to block unwanted sites from being available anywhere on the networked machines. In short, the Internet is filtered for all local users by an "upstream" computer or "firewall" that checks each request for web content. Other systems make no demands on local hardware or software: operating as Internet service providers, N2H2's Bess and other child-safe services "clean up" and select all material to be made available at the "supply" side of the network content stream.

 

Blocking uses keywords and web addresses to screen specific topics and sites from view on the client's computer. But not all blockers work alike: an important distinction is between blockers that use keywords alone and those whose action is supplemented by actual web addresses. The latter approach requires editorial oversight to screen individual web pages, but it is considered more accurate, screening out fewer innocent sites. Some keyword blockers also try to detect "context" so as to distinguish between images of women's "breasts" and information on "breast cancer"; these may use complex schemes to detect themes and the social utility of the information. Keyword blocking, in short, tries to anticipate undesirable web pages, whether old or brand new, and eliminate them on the fly. Address blockers, by contrast, require the advance preparation of either a "not" list of forbidden addresses or an "allow" list of the only sites that may be used; these lists may run into the thousands of entries.
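
The difference between bare keyword matching and a context-aware variant can be illustrated with a minimal sketch. The keyword list and the "rescue" phrases here are invented for illustration; commercial products use far larger, proprietary lists and more elaborate heuristics.

```python
# Naive keyword blocker versus a slightly context-aware one.
# Both lists are illustrative only, not drawn from any actual product.
BLOCKED_WORDS = {"breast", "sex"}
RESCUE_PHRASES = {"breast cancer", "safe sex education"}  # benign contexts to allow

def naive_block(page_text: str) -> bool:
    """Block if any keyword appears anywhere, regardless of context."""
    text = page_text.lower()
    return any(word in text for word in BLOCKED_WORDS)

def context_aware_block(page_text: str) -> bool:
    """Allow pages whose keyword hits occur inside known benign phrases."""
    text = page_text.lower()
    if any(phrase in text for phrase in RESCUE_PHRASES):
        return False
    return naive_block(page_text)

page = "Early detection of breast cancer saves lives."
print(naive_block(page))          # True  -- the innocent page is lost
print(context_aware_block(page))  # False -- the benign context is recognized
```

Even the "rescue" approach only trades one error for another: a page mixing benign and offensive uses would slip through, which is why address lists and human review remain necessary.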

 

Taking an entirely different approach, stealth monitors do no filtering at all. Instead they take snapshots of the screens each viewer accesses during a period of use (these may be every screen or a sample). Such systems rely on control occurring after inappropriate material has been viewed. They may be used to deter “cyberslacking” by employees or may become the basis of a disciplinary discussion with children who access inappropriate sites. What is or is not acceptable is determined by reviewing the record of each user.  Most such monitors are invisible to users, leaving warnings to those supervising use of the web.
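
A minimal sketch of this after-the-fact approach, assuming the monitor has already written a log of (user, address) visits; the log entries and watch terms are invented:

```python
# After-the-fact review of a usage log, in the spirit of stealth monitoring:
# nothing is blocked; flagged visits are set aside for a later conversation.
visit_log = [
    ("pat", "http://www.example-homework-help.org/"),
    ("pat", "http://www.example-casino.com/"),
    ("chris", "http://www.example-news.com/"),
]

WATCH_TERMS = ["casino", "xxx"]  # illustrative terms a supervisor might flag

flagged = [
    (user, url)
    for user, url in visit_log
    if any(term in url.lower() for term in WATCH_TERMS)
]

for user, url in flagged:
    print(f"Review with {user}: visited {url}")
```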

The key point is that hidden means constitute the basis of 60.5% of the software available, counting stealth monitoring together with proxy server and content access controls that can operate without warning messages.  Some 54% of software packages mention “secrecy” or “invisibility” in their product promotions.

 

About 27% (n=13) of products reviewed use several methods of filtering at once. Most commonly, content access barriers – passwords and specialized websites – pre-filter content. Otherwise, stealth monitoring is included for parents and employers who wish to track use by children and employees in a “cleaned-up” environment.

 

Table 1: Secondary filtering features by primary filtering method. Agents using a single method have been combined.

Primary Method      Single Method   Content Access   Stealth Monitor    Total
Other Methods            24               0                 0             24
                       100.0%             0                 0           100.0%
Blocking                  4               6                 5             15
                        28.6%           40.0%             33.3%         100.0%
Proxy Server              7               1                 1              9
                        77.8%           11.1%             11.1%         100.0%
Total                    35               7                 6             48
                        75.5%           12.2%             12.2%         100.0%

 

There are other, more blatant but less used, means of filtering the Internet. Hardware devices are simply a lock and key that keep the computer's modem from reaching the telephone line. Ratings attempt to assign numbers to web pages based on their violence, profanity, explicit depictions of sex and other characteristics, but this requires a widely used, agreed-upon definitional scheme and clear rules for judging severity. Movies, cable television and video games have struggled with this idea for years with little consistent success.

2.      Is the methodology of the filter agent discussed? These descriptions range from a single sentence promising that threatening content will be banished to lengthy discussions of how keyword filters operate in all their complexity. Most descriptions fall between these extremes, with very little disclosure of detail. These products compete in a busy marketplace and thus try to find a middle ground between overloading parents with technical details and touting their sophistication. Too much detail also gives competitors and critics insight into techniques that may be copied – or hard to defend.

 

In some 62% of filter agent descriptions, how the filter worked received more than a single-sentence description, sometimes continuing for several pages. A few agents needed no description because of their simplicity: hardware locks simply put one's modem under lock and key. Some descriptions were so perfunctory, or alluded to engines leased from other providers, that we classed them as questionable.

 

We also note in passing the dependency of filter agents upon words and text. Images cannot be recognized by the automated software available to consumers. How, for example, would $40 software distinguish images of Greek sculpture from modern-day pornography on the basis of form alone? The important point is to reckon just what the software does internally with its keywords and the level of human oversight of this process.

 

 

3.      Availability of the block word list: Most filtering – whether upstream with an Internet provider or operating on a user's machine – employs a list of keywords to block offensive content. These may be common words like "sex" or "bomb," slang like "hash" or "pot," hate words like "nigger" or "faggot," or technical terms offensive to some parents like "testicular" or "fetus."[20] Each page summoned from the web is tested against the list for the presence of these words. The problem is that such terms have both innocent and offensive uses, depending on context. Filter software differs in its agility at detecting these words in their offensive contexts while allowing innocent uses to pass through to the user. One defense is to allow the keyword or blocking word list to be viewed by purchasers of the filter agent. Few (8.3%) allow this, preferring to keep the lists as competitive secrets or – given their sheer size – declining to provide them for reasons of economy. The difficulty is that individuals must then learn proscribed words by testing filters with pages known to contain those terms. Some filter providers (CyberPatrol) offer web test pages where terms can be tried against simulated filtering action. In the figure above, certain filter agents, particularly those offering only monitoring of users, do not actually filter anything; they merely report what people have been viewing and are therefore not applicable (N/A) to this count.
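
Because vendors rarely publish their lists, a purchaser who wants to know which words a filter proscribes is left to probe it term by term, with pages known to contain each candidate word. The sketch below shows the bookkeeping side of such a probe; `is_blocked_by_filter` is a placeholder the tester would have to supply (for example, by loading a local test page through the filtered connection and noting whether it displays), and here it simply simulates an invented filter.

```python
def is_blocked_by_filter(test_sentence: str) -> bool:
    """Placeholder: in a real probe, display a page containing this sentence
    through the filtered connection and record whether it was suppressed.
    Here we simulate an (invented) filter that bans the word 'testicular'."""
    return "testicular" in test_sentence.lower()

CANDIDATE_TERMS = ["cancer", "testicular", "fetus", "recipe"]

proscribed = []
for term in CANDIDATE_TERMS:
    sentence = f"This page discusses {term} in a medical context."
    if is_blocked_by_filter(sentence):
        proscribed.append(term)

print("Terms this filter appears to proscribe:", proscribed)
```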

 

4.      Customizing Agents: A majority of agents allowed parents and others to add blocking terms (words, phrases) and URLs (website addresses) to the blocking list. In a few instances, factory-blocked URLs and terms could be "excepted" from blocking, allowing more complete customizing control. It is important to note that some packages use an "allow" list, where all but permitted terms and sites are rejected, in contrast to the more common practice of a "not" list, where forbidden terms and sites are enumerated. Obviously, the first option yields a highly constrained world of possibilities, but with an apparently lower risk of offensive materials slipping by the filter. Filters made for very young children tend to employ this severe method, the idea being that diversity is secondary to protection.

"Not" lists have a claimed 90 to 95% screening success, while "allow" lists claim much higher rates. The cost is in the innocent and presumably helpful sites that vanish because they are not on the "allow" list. Particularly devastating are the changes that take place in both language and websites over time: a "not" list will accommodate such shifts far more often than an "allow" system, which shuts out any term or site not imagined for inclusion.
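
The tradeoff between the two list types is easy to see in a sketch (the site names are invented):

```python
# "Not" list: default-allow, block only what is explicitly forbidden.
# "Allow" list: default-deny, admit only what is explicitly approved.
NOT_LIST = {"badsite.example"}
ALLOW_LIST = {"kids-encyclopedia.example", "museum.example"}

def not_list_permits(host: str) -> bool:
    return host not in NOT_LIST

def allow_list_permits(host: str) -> bool:
    return host in ALLOW_LIST

new_charity_site = "cancer-support.example"   # innocent, but unknown to either list

print(not_list_permits(new_charity_site))     # True  -- passes by default
print(allow_list_permits(new_charity_site))   # False -- invisible until an editor adds it
```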

 

5.      Regular Factory Updates: Because of rapid social change, language, semantics, slang and politically incorrect speech can shift swiftly. In an earlier time, the term "bareback" referred to riding horses without a saddle; in today's slang it also means unprotected sex between gay males – an implication only a year or two old. If parents lack the time to search out changes in offensive content and the new websites supporting it, the manufacturer may do this as a free or paid service. Some filter agents can be updated automatically with a fresh list on a monthly basis, usually for a small subscription fee. Regular updating is vital to the effectiveness of filter agents, given the fast content turnover and the rapidly evolving pop-culture terminology pervasive on the web.

 

About half of the filter providers offer updates, either as full revisions of their software or as add-in lists that update the original program. Clearly this is a task that could be done by concerned parents and institutions, but it is likely that few have the persistence and expertise to manually update this kind of utility software month after month. Fully automatic software that fetches updates on its own is rare for home users; only staffed "upstream" filtering by Internet providers affords this care on a continuing basis. The difficulty is that very few providers of upstream filtering said much about how often, and by what procedure, they keep their filter lists up to date. Those providing solely monitoring (indicated as N/A) do not filter and thus have no need of lists or updates.
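
Mechanically, a monthly update is little more than merging a vendor-supplied list into the locally installed one, as the sketch below suggests. The list contents are invented; real products distribute updates in proprietary, often encrypted formats.

```python
# Merging a (hypothetical) monthly update into a local blocklist.
local_blocklist = {"oldbadsite.example", "casino.example"}

monthly_update = {
    "add": {"newbadsite.example"},
    "remove": {"oldbadsite.example"},   # e.g., a site found to be wrongly blocked
}

local_blocklist |= monthly_update["add"]
local_blocklist -= monthly_update["remove"]

print(sorted(local_blocklist))  # ['casino.example', 'newbadsite.example']
```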

 

 

 

6.      Review Boards: A few providers of filter agents maintain advisory boards of parents, experts and, occasionally, minority groups who advise on new words and sites to consider for filtering. More importantly, perhaps, they may report unintentional filtering or provide a minority viewpoint on "acceptable content." Gay and lesbian groups, for example, have been vocal critics of filter agents because socially useful content about their community is often filtered as indecent, or at least upsetting to socially conservative families. On this item, we counted only 12.5% of providers as clearly having some kind of editorial board to review the keywords and web site addresses used for blocking. A much higher proportion – 37.5% – was indeterminate: it was difficult to tell whether regular staff simply reviewed lists periodically or outside consultants were specially deputized to review unfair exclusions and bad keyword choices. In normal practice, automatic web crawlers or "spiders" are commonly used to compile blocking lists as an alternative to costly human oversight. With only a few firms offering review panels, continuing errors are likely.
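
The difference between a crawler-compiled list and an editor-reviewed one comes down to whether a human decision sits between "flagged" and "blocked." A minimal sketch of that workflow, with all names invented and a toy rule standing in for the editor:

```python
# Candidate sites flagged automatically (e.g., by a spider keyed on trigger
# words) wait in a review queue; only editor-approved entries join the blocklist.
review_queue = ["hate-site.example", "breast-cancer-support.example"]
blocklist = set()

def editor_decision(url: str) -> bool:
    """Placeholder for human judgment; here, a toy rule spares the medical site."""
    return "cancer" not in url

for url in review_queue:
    if editor_decision(url):
        blocklist.add(url)

print(sorted(blocklist))  # ['hate-site.example']
```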

 

7.      Multiple Users: Half of the products reviewed could be adjusted for multiple users. This is important in allowing material to be selectively permitted according to a user's age or other criteria. It permits adult users to have little filtering and very young children to have higher levels of "safety." Users each have unique log-in names and passwords with preset levels of filtering. On-line Internet providers, proxy systems and software resident on individual PCs can all have this capability.
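
A sketch of per-user configuration, with invented logins, invented site tags and an arbitrary three-level scale:

```python
# Per-user filtering levels: 0 = none (adult), 1 = moderate, 2 = strict (young child).
USER_LEVELS = {"parent": 0, "teen": 1, "child": 2}

# Sites tagged (illustratively) with the minimum strictness at which they are blocked;
# None means never blocked.
SITE_BLOCK_AT_LEVEL = {"casino.example": 1, "toystore.example": None}

def permitted(login: str, host: str) -> bool:
    level = USER_LEVELS.get(login, 2)          # unknown users get the strictest setting
    block_at = SITE_BLOCK_AT_LEVEL.get(host)
    return block_at is None or level < block_at

print(permitted("parent", "casino.example"))   # True  -- no filtering for adults
print(permitted("teen", "casino.example"))     # False -- blocked at moderate and above
```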

 

This review must be tempered by the fact that relatively few of these products are in wide use. Just one product, CyberPatrol, arguably accounts for more than half the sales of filters for individual machines.[21] Reliable sales data for proxy and upstream filters aren't available, but these agents are likely to prosper as states and school districts mandate centrally controllable filters for all machines in a community. Some firms have taken extraordinary steps to refine their lists and address complaints of wrongful blocking. Yet these powerful and often clumsy tools are sold from discount store shelves with little description and little incentive for the consumer to question the actions they silently take once installed. Happily, several of the most popular products make strong efforts to update, review and give thorough explanations of how their products work.

 

Monitoring and Privacy: Monitoring and the use of filter agents to control content are part of a larger national issue of covert surveillance and privacy on the web. In data gathered by the Georgetown Internet Privacy Policy Survey (GIPPS) from the 7,500 busiest servers on the World Wide Web, ninety-three percent of the sites surveyed collect personal information from consumers; some 66% post at least one disclosure about their information practices.[22] The Online Privacy Alliance, a coalition of industry groups, found that 99% of sites in its study collect personal information.[23] A 1999 report by the Forrester Group claims that 90% of commercial websites are deficient in protecting the privacy of their users.[24] Beyond commercial interests, government has pressed since the Reagan years to increase Federal surveillance powers through electronic means, whether to counter "threats to the infrastructure" or to keep pace with the electronic security counter-measures of organized crime.[25] Businesses have a variety of means to monitor use; in one recent survey, 74% of businesses queried had some kind of oversight of their web users.[26] It is perhaps difficult to make the moral point to children that the rights of others – including privacy – need protection when the parent is aggressively using surveillance as well.

 

What, then, are the hallmarks of good privacy on the Internet generally, and for monitoring software in particular? What should consumers be asking of websites, Internet service providers and makers of filtering agents? While it is difficult to rate any one item as more important than the others, several points are clear:[27]

 

 

 

 

 

 

On-line privacy in operation is a mix of web-based features and proper notification. Filter agents play an important role in the control of content to, and the gathering of information from, the user. Their use will likely grow, and with it the need to give consumers good information about what is being left out of their information diet.

 

Research and Measurement Needs: In this report, we have presented a static view of filtering issues based upon manufacturers' data. For individuals and groups threatened by filtering, there is a strong need for tracking data based on the actual performance of filters. Some early efforts of this sort are notable. The Internet Filter Assessment Project (TIFAP), initiated in 1997 by librarian Karen Schneider, used volunteers from around the country to test filter products against test terms and websites.[28] Noted were both failures to filter offensive materials and the propensity for socially beneficial sites to be excluded. Results were reported qualitatively, as instances of gross errors and product failures.

 

But given the magnitude of the Internet and the range of products, these tests lacked statistical generalizability and have been outdated by the substantial turnover in both filtering products and Internet content itself. For general consumers, filter accuracy, adaptability and updating become important points over the life of the product. Have filters become more subtle, worse or more pervasive with time? Are they targeting new key terms and websites? Have once-banned terms become acceptable? Does a culturally sensitive product purchased in 1996 still work acceptably today? Such information must be solid and credible to industry, academe and government policy-makers. Certainly it must earn the attention of those who build and promote filtering products. Some key issues:

 

·        Internet Monitoring: As suggested above, this task is difficult and fault-prone for the makers of filtering products. It will not be much easier for those who review the action of these products on the web. Language, images, hyperlinks, motion and sound all present barriers to filters that commonly rely on text and simple, categorical meanings. We know, too, that web pages change swiftly with time. Though time "slices" or snapshots of the Internet are becoming available through the Internet Archive Project, the data files are massive and difficult to manipulate.[29] These are huge problems that partly explain the lack of comprehensive filter testing.

 

·        Analytic Efforts: How should websites be categorized? How can one manipulate huge datasets efficiently and at a tolerable cost? How will these tools evolve as the Internet and its supporting industry change with time? In short, the web awaits standards of description that have stability and wide acceptance, yet are flexible enough to catch new trends.

 

·        Benchmark Test Sets: How can filters be fairly and consistently tested as gateways to the Internet? Can consumer groups, filtering advocates and critics produce test runs that show differences among search engines, global shifts in filtered terms or phrases, or in websites "banned" by URL or system blocking? (A minimal sketch of such a test appears after this list.)

 

·        Policy Review:  Numerous websites have appeared that promise a “safe harbor” to children. Contents and links on each have been pre-selected as safe or fulfilling particular moral mandates.  Recently, a score of key Internet service providers have organized GetNetWise.org, a site centralizing information about filtering and giving assistance to those wanting to try out these tools. Supervised by the largest Internet providers like America Online, software providers like CyberPatrol and a number of civic organizations such as the US Chamber of Commerce and the Internet Content Rating Association, the site is potentially a powerful force in the adoption and use of filters. As of this writing, the organization is working out its acceptable use policies.[30] Clear policy is needed to alert those choosing filters to a sponsoring organization’s pet themes and possible biases – and to stand critical review by groups negatively affected.
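
As a starting point for the benchmark question raised above, a test set could be as simple as a list of URLs labeled "should pass" or "should block," from which over-blocking and under-blocking rates are computed for any filter under test. A minimal sketch, with an invented test set and a stand-in filter rather than any actual product:

```python
# A toy benchmark: labeled test URLs and a stand-in filter to evaluate.
TEST_SET = [
    ("breast-cancer-support.example", "pass"),
    ("safer-sex-education.example", "pass"),
    ("explicit-site.example", "block"),
    ("hate-site.example", "block"),
]

def filter_under_test(url: str) -> bool:
    """Stand-in for a real product: blocks any URL containing 'sex' or 'explicit'."""
    return "sex" in url or "explicit" in url

overblocked = sum(1 for url, label in TEST_SET if label == "pass" and filter_under_test(url))
underblocked = sum(1 for url, label in TEST_SET if label == "block" and not filter_under_test(url))
should_pass = sum(1 for _, label in TEST_SET if label == "pass")
should_block = sum(1 for _, label in TEST_SET if label == "block")

print(f"Over-blocking rate:  {overblocked / should_pass:.0%}")    # 50% -- the sex-ed site is lost
print(f"Under-blocking rate: {underblocked / should_block:.0%}")  # 50% -- the hate site slips through
```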

 

Will assessment aid come from without? Software manufacturers serve themselves by sidestepping evidence of their products' clumsiness. Government seems ready to embrace this software – faulty or not – as an alternative to First Amendment tangles and constituent pressure. And increasingly large Internet service providers seek easy, low-maintenance solutions to their oversight responsibilities. Few of the proposals mandating filtering or blocking consider the quality of the software to be used in much detail, if at all. It is probably from consumer groups, academics and organizations who stand to be harmed by the status quo that useful criticism will come.

 

A Growing Threat to Minority Voices? Multicultural content on the web has grown as improved access and training broaden the range of those able to create websites. The Internet affords a content diversity that possibly has not been seen since the birth of radio in the 1920s. As media historians know, much of radio's original variety – from churches, schools and community groups – was lost through early government favor for commercial interests, who went on to homogenize content into a handful of profitable formulas for mainstream audiences.[31] As web commerce becomes dominant – again assisted by Federal government policies – and there is increasing resort to filters as a way of soothing mainstream fears over content, a similar risk arises.

 

Recently, the US Senate passed a bill requiring all Internet service providers (ISPs) with more than 50,000 subscribers to offer upstream filtering.[32] Other bills have been introduced in Congress to mandate filters in libraries and schools.[33] Regulators – from legislatures to the courts – find it difficult to categorize the novel character of the web, complicating policies for socially constructive management of the Internet. Is the web broadcast, telephone or publication? It is, complexly, all of these, yet not any one. The strength of the Internet is its mutability as multiple media, shifting quickly from one form to another as a one- or two-way device available to almost anyone with modest equipment and a telephone line. With about 40 million Internet-connected US households (1999),[34] there is a mass-reach platform for thousands of groups denied access to costly commercial media like television. Just as importantly, it is a way for minority opinion to reach across great distances and gather a community on a world scale. Filtering, blocking and monitoring – absent adequate disclosure – stall these efforts, handing control of cyberspace to commercial forces or special interests that owe little to anyone save their investors and content ideologues.

 

Notes



[1] Reno v. American Civil Liberties Union, 521 U.S. __, 117 S. Ct. 2329 (1997).

[2] 47 U.S.C. Sec. 231.

[3] ACLU, Censorship In a Box: Why Blocking Software is Wrong for Public Libraries. Internet document. http://www.aclu.org/issues/cyber/box.html#appendix1. 1998.

[4] Shannon Buggs. "Wonders of Web come with caveat for employers: Free access worries employers." NandoNet. 7 February 2000.

[5] "Family-Based Filtered Internet Service Providers." Citizen-Link Research Papers. 11 February 2000. at http://www.family.org/cforum/research/papers/a0002551.html

[6] Amanda Long. “More ISP Consolidation Expected” Financial Post. 11 Nov. 1997.

[7] Yardena Arar. “How to avoid getting chewed up when your ISP is swallowed” PCWorld Online. Internet document,  http://japan.cnn.com/TECH/computing/9905/10/isp.idg/, 11 May 1999.

[8] Reuters. “Prosecutors Appeal Somm’s Case.” 3 June 1998.

[9] ACLU. “Online Censorship in the States” Internet document http://www.aclu.org/issues/cyber/censor/stbills.html

[10] ACLU. Testimony Before the US Senate Committee on Commerce, Science and Transportation, Hearing on Internet Indecency, February 10, 1998. Internet document.

[11] GLAAD. Access Denied-2. 1999, p.

[12] Global Internet Liberty Campaign. Member Statement to the Internet Content Summit, Munich, Germany, September 9-11th, 1999. Internet document, http://www.gilc.org/speech/ratings/gilc-munich.html

[13] Op. Cit. Access Denied, p.

[14] The Censorware Project. Passing Porn, Banning the Bible: N2H2’s Bess in Public Schools. 1999. Internet document.

[15] Brewster Kahle. “Preserving the Internet” Scientific American, March 1997, p. 82. Available as an Internet document, http://www.archive.org/.

[16] Ibid.

[17] Anne Bubnic. "Parent Control Resources." SafeKids.com, May 1999. at http://www.safekids.com/filters.htm

[18] GetNetWise. "Tools for Families," 1999. at http://www.getnetwise.org/tools/

[19] “Key websites for parents” AT&T Learning Network at http://www.att.com/learningnetwork/family.html. See also: Family Guidebook. “Summary of Features of Filtering Software” on http://www.familyguidebook.com/charts.html.

[20] See "Uncensored Words List" at www.searchwords.com. This site scores the frequency of use for keywords. Among the "Top 100" are many terms parents find objectionable.

[21] See “Tools for Families” www.GetNetWise.org.

[22] Culnan, M. (1999). Georgetown Internet Privacy Policy Study. Available: http://www.msb.edu/faculty/culnanm/gippshome.html

[23] Landesberg, M., et al. (1999). Self-Regulation and Privacy Online: A Report to Congress. Washington, D.C.: Federal Trade Commission.

[24] Paul Hagan with Stan Dolberg. “Privacy Wake-up Call” Forrester Brief. September, 1999.

[25] Electronic Privacy Information Center (EPIC). Critical Infrastructure Protection and Endangerment of Civil Liberties: An Assessment on the President’s Commission on Critical Infrastructure Protection. Washington, DC, 1998. Internet document  http://www.epic.org/security/infowar/epic-cip.html

[26] “E-Mail Poll Results.” Network Computing. 31 May 1999. on http://www.networkcomputing.com/forms/results.html

[27] Jeanette Burkette and John Bowes. The Transaction Snarl: Can eCommerce be Tamed Across Regulatory Frontiers? Paper presented to the annual meeting of the International Association for Mass Communication Research, Leipzig, July 24-27, 1999.

[28] Karen Schneider. "The Internet Filter Assessment Project: Learning from TIFAP." September 1997. at http://www.bluehighways.com/tifap/

[29] See The Internet Archive Project, http://www.archive.org/

[30] GetNetWise.org. “Authors & Contributors” Safety Guide. Internet document, http://www.getnetwise.org/safetyguide/authors.shtml

[31] Robert W. McChesney. Telecommunications, Mass Media, and Democracy: The Battle for the Control of U.S. Broadcasting, 1928-1935. New York: Oxford, 1993.

[32] Jesse Berst. "The Dirty Secret About Web Filters." ZDNet AnchorDesk. 26 May 1999. on http://www.zdnet.com/anchordesk/story/story_3429.html

[33]“House Adopts Frank’s Internet Filtering Bill.” Tech Law Journal. 18 June 1999. on http://www.techlawjournal.com/censor/19990618.htm

[34] Web Trend Watch. “Growing ‘Ad Hoc’ Convergence.” E&P Online. 23 June 1999 on http://www.mediainfo.com/ephome/news/newshtm/webnews/wt062399.htm