Under Section 512(c)(1)(A), an online service provider loses its "safe harbor" when it has "actual knowledge" of infringing activity or is "aware of facts or circumstances from which infringing activity is apparent," and, "upon obtaining such knowledge or awareness," fails to "act expeditiously to remove, or disable access to, the material." (I've ignored (c)(1)(B) and (C).) The difficulty comes in articulating the difference between a generalized knowledge standard, which courts have rejected, and the specific facts or circumstances that might make a provider "aware."
The AIPLA brief accuses the CCBill and YouTube courts of erasing the distinction between the "actual knowledge" and "awareness" prongs by "extending the degree of knowledge required for 'awareness' beyond that established by Congress..." For the "awareness" standard, the DMCA's legislative history refers to "red flags" that might put a service provider on notice, which presumably should prompt the provider to remove the infringing material. To me, the AIPLA's position on "red flags" is analogous to the generalized knowledge and similar theories rejected in YouTube, CCBill, and UMG Recordings v. Veoh. Those courts note the difficulty of monitoring uploads, the irrelevance of factual disputes in light of clear law, and, most importantly, that Congress placed the burden of policing infringement on copyright owners.
I agree with the AIPLA that the "awareness" prong needs clarification, but rejecting CCBill is not the way to get it. Confusing things further, the AIPLA also agrees that the awareness prong still requires identification of specific infringements. How does that not erase the distinction between the two prongs? The legislative history the AIPLA relies on to dispute CCBill reads:
"The important intended objective of this standard is to exclude sophisticated “pirate” directories—which refer Internet users to other selected Internet sites where pirate software, books, movies, and music can be downloaded or transmitted—from the safe harbor. Such pirate directories refer Internet users to sites that are obviously infringing because they typically use words such as “pirate,” “bootleg,” or slang terms in their uniform resource locator (URL) and header information to make their illegal purpose obvious to the pirate directories and other Internet users. Because the infringing nature of such sites would be apparent from even a brief and casual viewing, safe harbor status for a provider that views such a site and then establishes a link to it would not be appropriate."
Beyond the technological and practical irrelevance of that language, written in 1998, both the conflict in the brief and the tension in the DMCA itself come from Section 512(m). Section 512(m) provides that safe harbor protection is not conditioned on a service provider monitoring its service or affirmatively seeking out facts indicating infringing activity; in short, there is no duty to investigate. In CCBill, the Ninth Circuit held that even though the defendants provided services to "illegal.net" and "stolencelebritypics.com," they were not "aware of apparent infringing activity." The court concluded:
"When a website traffics in pictures that are titillating by nature, describing photographs as 'illegal' or 'stolen' may be an attempt to increase their salacious appeal, rather than an admission that the photographs are actually illegal or stolen. We do not place the burden of determining whether photographs are actually illegal on a service provider."
Consistent with 512(m), and with the way the DMCA places the initial burden on copyright owners to identify specific instances of infringement and notify service providers to take them down, the Ninth Circuit properly refused to shift that burden onto service providers. How could anyone know, after a "brief and casual viewing," whether online content is infringing? And what rational service provider, in light of 512(m), would risk its safe harbor status by investigating further? In YouTube, Judge Stanton quotes Veoh to remind us that "CCBill teaches that if investigation of 'facts and circumstances' is required to identify material as infringing, then those facts and circumstances are not 'red flags.'"
The AIPLA brief goes on to argue that the CCBill standard "elevat[es] the lack of a duty to investigate to such extreme proportions, and adopting such a limited definition of 'awareness' that ignores even the explicit red flags identified by Congress… effectively eliminate[s] any viable distinction between 'awareness' and 'actual knowledge.'" The brief adds that "[a]lthough awareness of specific instances of infringement are required, the service provider may not ignore the obvious" without exhibiting willful blindness that might eliminate the safe harbor. Perhaps the two cases do blur the line between "awareness" and "actual knowledge" by failing to articulate clearly what facts and circumstances a service provider must be aware of to lose its safe harbor. But the AIPLA's acknowledgment of 512(m) is disingenuous in light of its statement that service providers "may not ignore the obvious." Further, its position is circular: there is no duty to investigate, unless something is obvious; specific identification of infringement is still required, but there is no duty to investigate... That position would also place a burden on service providers that courts and Congress have squarely rejected.
All in all, the way 512(c) is written suggests that the "awareness" standard hardly matters so long as the service provider "acts [or responds] expeditiously to remove, or disable access to, the material…" The outcomes of these cases confirm that view. The CCBill facts support the good ol' law school maxim that bad facts make bad law. In light of that, we should not let snippets of legislative history make bad policy. As lawyers often note when arguing against examples drawn from legislative history: if Congress wanted triggers like "pirate" or "bootleg" in a domain name to constitute "awareness" that infringing activity is or might be taking place, why didn't it write that into the statute? And what's next, naughty words in usernames or online pseudonyms as proxies for identifying repeat infringers? If you care about free speech, common sense, and keeping online commerce humming along, you'll agree that the courts have got it right.