Web “security” provider Sucuri helps online pirates cloak criminal activities
As piracy has evolved and enforcement efforts have increased, pirate entrepreneurs have been forced to set up shop far offshore to avoid the long arm of U.S. law. What’s troubling is how U.S. companies help them evade the law, providing cover for their illegal piracy businesses while pocketing their own dirty profits in the process.
Follow along as I run an obstacle course, the type creators face every day trying to protect their work, to see how U.S. companies (in this case, GoDaddy-owned Sucuri) help criminals cloak their activities and keep their illegal sites operating smoothly.
Start the hunt with Google Search
While Google claims to have cleaned up its act, the reality is that with a single search I quickly found a website featuring a cache of pirated movies. It wasn’t difficult.
No surprise, the second Google result led directly to a site offering a cornucopia of pirated popular lesbian-themed films and television shows, both Hollywood and indie fare.
I chose an indie feature and with a click began my journey through the maze to uncover where the stream for the stolen movie was actually hosted.
Finding the actual source is a huge pain. I was forced to click through a series of popup ads; after all, that’s how these online pirates make money. Finally, I used Firefox’s web developer tools to scan through the page source as the movie streamed and eventually uncovered the pirate link I was looking for.
When I clicked that link, I ended up at the actual full stream for the film.
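For fellow creators who’d rather not click through popups, part of this hunt can be automated. Below is a minimal sketch of the idea: fetch an embed page and scan its markup for direct video files or HLS playlists. The URL is hypothetical, and real pirate pages typically bury the stream behind iframes and obfuscated JavaScript, so treat this as a rough approximation of what the browser’s developer tools reveal.

```python
# A rough sketch of the "scan the page source" step, using only the
# Python standard library. The embed page URL below is hypothetical.
import re
import urllib.request

EMBED_PAGE = "https://pirate-site.example/watch/12345"  # hypothetical URL

req = urllib.request.Request(EMBED_PAGE, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

# Look for direct video files (.mp4) or HLS playlists (.m3u8) in the markup.
for link in re.findall(r'https?://[^\s"\'<>]+\.(?:mp4|m3u8)[^\s"\'<>]*', html):
    print(link)
```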
You find the source. Now what?
Turns out the file is hosted on a site called “gounlimited.to,” but that isn’t much help. As I discovered, and Torrent Freak has previously noted, this particular pirate website brags that it ignores the DMCA and uses that fact as a selling point. Per Torrent Freak, this isn’t the operator’s only rodeo either: “Faced with a lack of stable ‘takedown resistant’ hosting providers to stream videos from, Bader decided to start one of his own, GO Unlimited.”
Of course, like all piracy sites, this operation is in the business of making money off stolen goods, so its content is populated thanks to minions worldwide enticed by cash rewards, with payouts based on the number of eyeballs each illegal upload attracts. It’s the typical cyberlocker scenario. For the record, I will also be contacting PayPal to ask why they remain affiliated with this criminal operation, but I digress.
Since Go Unlimited brags about ignoring the DMCA and offers no contact information, the next step is to investigate its registrar and host. The .to domain is popular among shady sites for a reason: information isn’t listed in the typical WHOIS database. The .to registry offers its own lookup, but it returns little in the way of actual information. The registrar cares little about criminal enterprises.
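For the curious, here’s what that dead end looks like in practice, as a minimal sketch using the raw WHOIS protocol on port 43. whois.tonic.to is, as far as I can tell, the .to registry’s WHOIS server, and as noted above, it returns next to nothing.

```python
# Query a WHOIS server directly over port 43 (standard library only).
# The .to registry's server (whois.tonic.to, to my knowledge) returns
# almost no registrant information, which is exactly the problem.
import socket

def whois_query(server: str, query: str) -> str:
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois_query("whois.tonic.to", "gounlimited.to"))
```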
What next? Turns out a U.S.-based company, GoDaddy’s Sucuri, is listed as the IP provider. Sucuri does business with a pirate website but explains in its disclaimer (poor spelling aside) that it’s not responsible, this way:
The Sucuri Firewall is a passthrough proxy WAF & CDN service. Sites using our service will point their DNS records at Sucuri IP’s, but all content is actualy (sic) hosted outside of the Sucuri network.
The excuse that they don’t “host” the content is a bit weak considering that the pirated data does flow through Sucuri’s servers on its way to the end user. Essentially the excuse goes like this: “We only provide the ingredients used to bake the cake, not the finished cake.” Pretty lame. While perhaps legal, it certainly doesn’t seem moral. The question is, WHY do we allow U.S. companies to do business with sites that ignore U.S. copyright law?
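You can see the mechanics of that pass-through arrangement for yourself. In the sketch below, resolving the pirate domain surfaces only the proxy’s IP addresses, and reverse DNS on those addresses typically names the security/CDN provider rather than the true origin server. (The domain may no longer resolve by the time you read this.)

```python
# Demonstrate how a pass-through proxy masks the origin host.
# Resolving the domain yields the proxy's addresses, not the pirate's.
import socket

DOMAIN = "gounlimited.to"  # may no longer resolve

hostname, aliases, addresses = socket.gethostbyname_ex(DOMAIN)
print("A records:", addresses)

for ip in addresses:
    try:
        # Reverse DNS usually points back at the proxy/CDN operator.
        print(ip, "->", socket.gethostbyaddr(ip)[0])
    except socket.herror:
        print(ip, "-> no reverse DNS record")
```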
In a further insult, Sucuri lists publisher HarperCollins as one of its customers. Ironic that Sucuri’s PR folks see no conflict of interest in servicing a piracy operator alongside one of its potential victims. (Note: book publishers and authors are suffering mightily due to e-book piracy.)
So what’s the solution? Once again, the DMCA needs to be updated for the 21st century. I’ve written about this issue extensively in the past, and you can read those thoughts here. Clearly, third parties who are knowingly complicit in providing infrastructure for criminal enterprises need to be held to greater account when a client ignores the law.
Once again a possible path forward can be found by looking at the European Union. Last month a court in Italy ruled against Cloudflare, ordering the company to cease doing business with an illicit website.
The court used the EU’s Electronic Commerce Directive (2000/31/EC) to justify its judgment against Cloudflare. The law cited provides a legal “framework” for electronic commerce. It’s time for U.S. lawmakers to enact similar safeguards for U.S. creators. Participating as a for-profit player in the piracy ecosystem should not be a legal business model in the United States.
Google takes users on a crooked path to another charity that apparently pays for placement
Amazon.com offers its users a small way to give back to a favorite charity by using the Amazon Smile portal instead of the regular site. When I make a purchase via Amazon, I’m happy to know that 0.5% of my purchase goes to a charity of my choice. The key to remember, for a donation to be made, is to log in through the Smile portal.
The other day I wanted to buy something and didn’t recall the exact Amazon Smile URL, so I used Google search to direct me to the proper link. This would have been fine except for the fact that Google is not really a benign search engine. It’s actually set up as a gigantic for-profit enterprise that allows its customers to purchase advertising and influence. Apparently, search results for Amazon’s charity portal are also considered fair game.
Google’s opaque results set up to fool users into donating to Boston Medical Center
I discovered this evil design when I searched Google for the “Amazon Smile” site. The top search result took me there all right, but with one big caveat. Upon login, the Google search link nefariously served up a window suggesting I switch from my current charity choice to Boston Medical Center instead. Now, I have no problem with Boston Medical Center earning funds from Amazon users who knowingly choose it as their charity of choice. In fact, the medical center is a very worthy charity serving many in need of serious medical care. But I do have a problem if money has changed hands in an opaque effort to push me, and others, into switching. Is that fair to my original choice of charity?
Here’s how it worked, as I illustrate in the graphic below: Click the link in the top result and you’re taken to the Amazon Smile login page. All’s good, until you log in. To proceed, you must then either select Boston Medical Center (the highlighted option) as your new charity or confirm your previous charity. My question is, why should I be forced to (re)confirm my original charity choice?
I caught on to the scheme quickly, wondering how in hell Boston Medical Center was butting its way into my charity preference, but not every web user will. Let’s face it, some people, particularly older folks, may just click through innocently. After all, the “Yes, change my charity” option is the choice that’s highlighted. Amazon offers this explanation as to why you are being given an option (even though you didn’t think you asked):
You clicked a link from an email or another website indicating you want to set a different charitable organization from the one you selected during an earlier visit. You need to either confirm that you want to change your charity, or keep your previously selected charity.
Problem is, I did not do that. I used Google search and trusted its results. I assume that Amazon remains unaware this is happening.
So, just to summarize: in this world, it’s fair game to siphon off charity dollars that should be directed elsewhere? Google receives payment to rig Amazon Smile search results in such a way that visitors may be tricked into switching. Nothing charitable about that approach, is there? In fact, it’s a pretty damn skeevy way to attract donations. Skeevy is, however, Google’s middle name.
For the record, Google claims it labels its ads in search results:
When people search on Google for something they want, they find two types of results: search results and ads. Search results appear as links on search results pages and aren’t part of Google’s advertising programs. Ads appear under an “Ads” label and may be placed in several locations around the free search results.
Nice disclaimer, but when it came to my search for “Amazon Smile,” there was NO indication that this result and corresponding link were paid product placements. I guess Google uses different terminology for its practice of nesting duplicitous links? This switcheroo is also devious on both Google’s and Boston Medical’s part. If you want legitimate search results, untainted by the profit motive, Google is truly not the place to go. I wonder if any attorneys general are examining this practice? Perhaps I should ring up the Massachusetts AG and ask.
Bing offers legit search results at the top
For comparison’s sake, I checked out Bing and Yahoo. Bing wins the award for operating as an actual search engine, at least in this case. The Amazon Smile portal is the first result, no questions asked. It’s simple and easy to click through to the page you actually want.
Yahoo does place a couple of advertisements at the top of its results, BUT at least they are clearly marked as advertisements. (Although it seems odd that weather.info is allowed to piggyback on Amazon’s name.) After a small spacer, the first genuine result is a legit link to the Amazon Smile portal that doesn’t try to trick you into selecting a different charity.
It’s no surprise that Google manipulates its search results to deploy advertising and fill its coffers, but to do so in such an opaque and deceitful way is pretty awful. Once again, Google’s reputation for greed is shown to be well deserved.
Apparently even those who would like to do a little good in the world by donating through Amazon Smile are considered fair game for exploitation by the Google team. Profits above people at every turn. Shame on Google.
YouTube is a money machine for Google. While actual numbers are hard to come by, it’s estimated that the online video hub brings in upwards of $15 billion annually. With that much money at stake, it’s not surprising that its business model continues to put profits over people.
YouTube content is often offensive, violent and awful
In 2015, following the on-air murder of a television reporter and her cameraman, gruesome videos of the event were quickly posted on YouTube with ads alongside. I wrote a post about it at the time:
Ads for Chili’s Restaurant and Aria Resort appear on clip of WDBJ murder.
It’s no secret that YouTube slaps advertising on pretty much anything without regard for subject matter or ownership, but making money off of last week’s on-air murder of WDBJ-TV reporter Alison Parker and her cameraman Adam Ward is a new low. A source tipped me off to the fact that a number of opportunistic (and shameless) YouTube “partners” have uploaded and monetized clips of both the station’s live broadcast and the video taken (and uploaded to Twitter) by the deranged murderer as he executed the two journalists during a televised live-shot for the morning news.
Four years later, it’s as if nothing has changed.
This week Andy Parker, the murdered reporter’s father, wrote an emotional piece for the Washington Post describing how YouTube/Google’s business practices continue to damage his family to this day.
After he established a foundation to support arts programs for underserved children in Virginia and began advocating for gun safety to prevent tragedies like the one that took his daughter’s life, his family became the target of conspiracy theorists.
They have taken the gruesome footage of my daughter’s murder, edited it into videos selling these lies and flooded YouTube with hate-filled diatribes maligning my family.
The vitriol directed at me and my family has been unbearable. So I was outraged to discover that recommendation algorithms for YouTube and its parent company, Google, have bolstered these conspiracy theories.
Parker puts the blame for this clearly at Google’s door: “As much as I want to blame the sick creators for the pain I feel, I blame Google even more. By surfacing this content and profiting from the data Google collects from those who view it, Google is monetizing Alison’s death and our family’s pain.”
Of course, the Parker family’s experience is only one example in a long list of bad behavior by YouTube. In the past, the company has monetized everything from terrorist training videos to scenes promoting the sexual exploitation of children.
Algorithms make kiddie porn easy to access and make money for YouTube
Just this past month, YouTube’s algorithms came under more direct fire for “facilitating the sexual exploitation of minors” after YouTuber Matt Watson posted a video demonstrating how YouTube’s suggested videos took users to a series of clips showing children in various states of undress, with comment sections where links to child pornography could be found. Watson told Ars Technica:
“YouTube’s recommended algorithm is facilitating pedophiles’ ability to connect with each other, trade contact info, and link to actual CP [child pornography] in the comments,” Watson reported. “I can consistently get access to it from vanilla, never-before-used YouTube accounts via innocuous videos in less than ten minutes, in sometimes less than five clicks.”
Wired also ran an extensive report on YouTube’s ongoing porn video monetization problem and how pedophiles use its comment section as a virtual bulletin board.
Videos of children showing their exposed buttocks, underwear and genitals are racking up millions of views on YouTube – with the site displaying advertising from major cosmetics and car brands alongside the content.
Comments beneath scores of videos appear to show paedophiles sharing timestamps for parts of the videos where exposed genitals can be seen, or when a child does the splits or lifts up their top to show their nipples. Some of the children in the videos, most of whom are girls, appear to be as young as five. Many of the videos have hundreds of thousands, if not millions of views, with hundreds of comments.
Of course, when confronted with such evidence Google belatedly responds, but only in the short term. For a company with such massive reach and resources, why can’t safeguards be put in place to prevent such behavior? They certainly have the means. What the company lacks is the will. Clearly, for Google/YouTube, it pays to look the other way.
YouTube prefers money to the moral high road
We’ve seen this in YouTube’s approach to the online piracy of films and music for years, and despite repeated calls for change, we’ve only seen these insidious tendrils of exploitation for the sake of profit continue to grow, not recede. Will Washington ever wake up? Well, maybe.
Today Senator Elizabeth Warren announced that if she’s elected president, she will break up big tech companies like Google and Facebook. In a Medium post describing her plan, she characterized the issue this way:
As these companies have grown larger and more powerful, they have used their resources and control over the way we use the Internet to squash small businesses and innovation, and substitute their own financial interests for the broader interests of the American people. To restore the balance of power in our democracy, to promote competition, and to ensure that the next generation of technology innovation is as vibrant as the last, it’s time to break up our biggest tech companies.
Warren’s proposal is an important beginning. It’s long past time for the U.S. government to take action against a tech industry that has managed to avoid any semblance of regulation by repeating the tired mantra that reasonable regulation would “stifle innovation.” In fact, regulating the industry would do the opposite: it would create a fair and sustainable marketplace not dependent on the exploitation of others for its success.
Innovation takes many forms. The version of “innovation” created and currently promoted by the likes of YouTube is not necessarily one a civilized society should aspire to.
U.S. firms enable scammers to bait consumers and steal personal info
Spam and scams have become a way of life. Every day my inbox is full of emails warning that my Apple, PayPal or Wells Fargo credentials have been compromised and instructing me to click a link to restore my good standing. Of course, I’m well aware these are scams, but clearly there are many who aren’t.
The same holds true for websites. It’s a well-known fact that for many, if not most, piracy peddlers, online malware supplies their lifeblood, their income. The Digital Citizens Alliance* just released a new study highlighting the role U.S. companies are playing in support of this scourge.
In the case of content theft, the pirated movies, TV shows and music is the draw. Bad actors dangle free content, consumers take the bait, and the end result is millions of identities at risk and billions of dollars stolen. Then these computers are taken over to wreak more havoc, causing a nightmare for everyone from Internet users to advertisers who get defrauded, to corporations blackmailed into paying off hackers who threaten to use those rogue computers to launch attacks.
While these rogue sites are run by overseas operators, the DCA found that many are hosted by companies headquartered here in the United States. The study singles out two U.S.-based firms, CloudFlare and Hawk Host, as routinely offering up services to malware-infested sites.
CloudFlare helps these criminals mask their locations by shrouding their network hosting and domain info:
In order to utilize CloudFlare’s CDN, DNS, and other protection services customers have to run all of their website traffic through the CloudFlare network. The end result of doing so is masked hosting information. Instead of the actual hosting provider, IP address, domain name server, etc., a Whois search provides the information for CloudFlare’s network.
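The DCA’s point is easy to verify for yourself. In the sketch below (the domain is hypothetical), resolving a CloudFlare-fronted site and then asking ARIN’s WHOIS service who owns the resulting IP returns CloudFlare’s registration details, while the actual hosting provider stays invisible.

```python
# Resolve a CloudFlare-fronted domain, then ask ARIN's WHOIS service
# who owns the IP. The answer names CloudFlare, not the real host.
import socket

DOMAIN = "cloudflared-site.example"  # hypothetical domain
ip = socket.gethostbyname(DOMAIN)

with socket.create_connection(("whois.arin.net", 43), timeout=10) as sock:
    sock.sendall((ip + "\r\n").encode())
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.decode("utf-8", errors="replace"))
```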
When researchers at the DCA contacted CloudFlare for comment, they received the typical boilerplate “we aren’t responsible for our customers” response:
CloudFlare’s service protects and accelerates websites and applications. Because CloudFlare is not a host, we cannot control or remove customer content from the Internet. CloudFlare leaves the removal of online content to law enforcement agencies and complies with any legal requests made by the authorities. If we believe that one of our customers’ websites is distributing malware, CloudFlare will post an interstitial page that warns site visitors and asks them if they would like to proceed despite the warning. This practice follows established industry norms
-DCA
The DCA’s investigation into Hawk Host highlighted the same scenario: use pirated films and music to attract visitors and entice them to download malware (before they can download the pirated content). The response from Hawk Host was somewhat different, in that its tech support staff agreed that the malware sites reported by the DCA were indeed violating the company’s terms of service and should be closed. According to the report:
After an exchange of information, Hawk Host agreed the sites did violate their policies and told Digital Citizens the sites would come down. Cody Robertson (Chief Technical Officer) said the sites “clearly violate our TOS / AUP.” He did add that it would be impossible for Hawk Host to audit all of the 100,000-plus sites they host and that they would continue to rely on abuse reports. Hawk Host’s swift action is an encouraging sign and Digital Citizens is hopeful that the company will continue to take steps to protect Internet users from malicious content.
This is a step in the right direction. For many websites, piracy is a means to an end, and in order to win the fight against it, the problem must be tackled on many fronts, from search, to infrastructure, to income. The threat of the public being victimized by malicious malware only adds to the damage done by online pirates. You can read the entire DCA report here.
The threat of malware could turn people away from piracy
Last week the Digital Citizens Alliance (DCA)* released a study that found websites offering free, pirated content were rife with malware. According to the report, 33% of content theft sites exposed users to malware. Every month 12 million U.S. visitors to these sites open themselves up to the theft of personal data, or worse.
To assess the impact this malware threat might have on Americans’ web surfing habits, the DCA conducted two surveys on December 10-13.
The first examined the behavior and opinions of 1,000 Americans, while the second focused on 500 Americans aged 18-29 (an age group more likely to partake in piracy). The main takeaway: once people realize malware is a threat, they are much less likely to visit these sites.
Fifty-three percent of Americans aged 18-29 acknowledged having visited content theft sites, nearly three times the rate of the overall population.
Seventy percent said that they knew these websites illegally offered content, while 13 percent said they knew it was “wrong” but weren’t sure if it was illegal.
Sixty-three percent said that if visiting these content theft websites exposed them to malware they would steer clear of them in the future.
Figures for all age groups show an even greater aversion to the malware risk, with 82% reporting they’d steer clear of such websites. This, coupled with the growing influence (and traffic) of legit streaming sites like Netflix, gives some cause for optimism in the ongoing battle against online pirate profiteers.
*Disclosure: I’m a member of the Digital Citizens Alliance Advisory Board
Facebook has been promising for some time to introduce tools that would allow rights holders to automatically detect and remove pirated content from its pages.
The company has endured a lot of bad publicity around the freebooting of viral YouTube videos on its pages, and Facebook has also long been a place where pirated movies and music found a cozy habitat. That is, until now. I’ve recently begun to utilize this tool to manage Facebook DMCA takedowns and wanted to share my first impressions, but first a bit of background.
First of all, I’m thrilled that Facebook, with all its resources, has finally begun to take copyright infringement seriously. In introducing the new tool last month the Facebook development team explained why the company had finally stepped up:
Video has become an important part of the Facebook experience for people around the world, due in large part to the amazing creativity we’re seeing from all kinds of video publishers.
To provide the best experience for everyone who watches, creates and shares videos on Facebook, we work with our community to understand which tools they want us to build. Based on this feedback, on top of the measures we already have in place, we’ve been building new video matching technology to further help rights owners protect the content they own.
Signing up is easy and the interface straightforward and simple to use
I found signing up for the Rights Manager tool to be relatively straightforward. You must have a page to link Rights Manager to, and I initially applied for, and was accepted into, the program using our film’s Facebook page. Once I received approval, I was able to upload a reference copy of our film (and trailer) to the Facebook Rights Manager dashboard. A trailer I’d uploaded to our page previously was also listed. From there, Facebook’s automated digital matching tools went to work.
Facebook’s Rights Manager dashboard is pretty straightforward
Easily upload and maintain a reference library of the video content they want to monitor and protect. Publishers can upload content libraries and publish live video as references for Rights Manager to check against, including videos they are not sharing publicly on Facebook. Rights Manager then monitors for potential infringement of that content across Facebook.
Create rules about how individual videos may be used. Publishers can set specific match rules to either allow or report copies of their videos based on criteria of their choosing—for example, how much content has been reused, where the matching video is located or how many views the matching video has received.
Identify new matches against protected content. Rights Manager’s dashboard surfaces any new matches against a publisher’s uploaded reference files and live video. On the dashboard, publishers can filter matches by time, date or view count, and then either report potential copyright infringement or allow the matching content to remain published.
Whitelist specific Pages or profiles to allow them to use their copyrighted content. Publishers can specify Pages or profiles that have permission to publish their protected content without being monitored for potential infringement.
Protect their reference library at scale with the new Rights Manager API. We’re rolling out an API for Rights Manager to improve bulk uploading for publishers and to allow media management companies to support partners in managing, monitoring and protecting their content across Facebook. You can find out more about the Rights Manager API here.
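I haven’t worked with the API myself, and Facebook’s documentation is aimed at larger partners, so the following is only a hypothetical sketch of what bulk reference registration might look like through a Graph-API-style endpoint. The edge and field names are my own illustrative guesses, not Facebook’s documented interface.

```python
# HYPOTHETICAL sketch of bulk-registering reference files via a
# Rights Manager-style Graph API edge. The endpoint and field names
# below are illustrative guesses, not Facebook's documented API.
import requests

ACCESS_TOKEN = "PAGE_ACCESS_TOKEN"  # placeholder
PAGE_ID = "YOUR_PAGE_ID"            # placeholder

def register_reference(video_url: str, title: str) -> dict:
    resp = requests.post(
        f"https://graph.facebook.com/{PAGE_ID}/video_copyrights",  # assumed edge
        data={
            "access_token": ACCESS_TOKEN,
            "reference_file_url": video_url,  # assumed field name
            "title": title,                   # assumed field name
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Register each film in a small catalog as a protected reference.
catalog = [("https://example.com/our-film.mp4", "Our Film")]
for url, title in catalog:
    print(register_reference(url, title))
```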
Facebook’s tech support is responsive and proactive in working to improve the system
Facebook asks for feedback in an effort to improve its rights manager tools
I do believe this type of fingerprinting technology will be an increasingly crucial tool as we move forward in the battle against online piracy on sites like Facebook, but as with any new offering, there are glitches.
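Before getting to those glitches: for readers curious what “fingerprinting” means in practice, here’s a toy illustration of the underlying idea: reduce a frame to a tiny grayscale thumbnail, turn it into a bit signature, and compare signatures by Hamming distance. This is a textbook “average hash,” not Facebook’s actual technology; production systems fingerprint audio and video far more robustly.

```python
# Toy perceptual hash ("average hash") to illustrate fingerprint
# matching in principle. Requires Pillow: pip install Pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink a frame to size x size grayscale and pack one bit per pixel:
    1 if the pixel is brighter than the frame's mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (px > mean)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Frames "match" when only a few of the 64 signature bits differ.
ref = average_hash("reference_frame.png")       # frame from my reference file
candidate = average_hash("uploaded_frame.png")  # frame from a suspect upload
print("match" if hamming(ref, candidate) <= 5 else "no match")
```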
The good news is that so far, Facebook’s technical support team has been quite responsive, and the company seems to be making a concerted effort to sort through issues and improve the tool’s operation. Any time you remove an item from the dashboard, a window pops up soliciting feedback. I’ve also had a fair amount of helpful email correspondence with the support team and have found Facebook’s prompt and open response to my queries a welcome contrast to the less-than-stellar support offered by a (popular) site that shall remain nameless.
As with any new tech, there are some glitches
I also set up a Rights Manager account for an independent film distributor I work for, and in the process of uploading dozens of reference files I’ve found the “matching” to be rather hit and miss. At this point Rights Manager seems to do a great job detecting the company’s opening logo (and music) but little else. What makes it even stranger is that the tool detects the distributor’s opening logo and music and then matches it to the wrong reference file. Obviously, ALL the titles I’ve been uploading share the same opening sequence from the distributor, but when it comes time to actually issue the takedown to remove the infringing (matched) content, the form auto-populates with the film’s title, which in these instances is the wrong one.
Lots of early glitches with Facebook’s Rights Manager tools
I’ve also come across situations where a single film title is simultaneously listed as having matched multiple reference files for different titles, but NEVER the actual reference file for that particular film. Consequently, rather than send a DMCA notice with incorrect information, which would be illegal, I have chosen to wait for Facebook to sort out this particular glitch. This is where their responsive tech support will, hopefully, come in handy.
I’ve also found that a lot of the flagged uploads don’t really match anything. Perhaps a song playing in the background matches the film’s soundtrack, but it’s difficult to tell. At this point the system’s matching capabilities clearly need to be dialed in to better weed out innocent content.
As it stands, I have been manually removing these erroneous matches from the dashboard, but that takes precious time, and efficiency is one reason this system was developed in the first place. For larger entities there are API tools, but for smaller, independent entities, it seems that utilizing the dashboard will be the best route.
Users can create “match rules” to fine tune content matching
Some of the hiccups I’ve encountered thus far are likely simple bugs in the system, while others may well be user error. Fortunately, Facebook has created tools that allow publishers/creators to fine-tune matches based on length of time, territory and content type.
I plan to spend some time working through the reference files I’ve uploaded to create appropriate match rules in the hope that it will result in fewer false positives.
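Conceptually, a match rule is just a set of thresholds applied to each match’s metadata. The sketch below is purely illustrative (the field names are mine, not Rights Manager’s schema), but it captures the logic I’m after: ignore brief, incidental overlaps and only flag substantial reuse.

```python
# Illustrative only: a match "rule" as thresholds over match metadata.
# Field names here are my invention, not Rights Manager's schema.
from dataclasses import dataclass

@dataclass
class Match:
    overlap_seconds: float  # how much of the reference file was reused
    territory: str          # where the matching video was posted
    views: int              # traction the matching video has gained

def should_report(m: Match) -> bool:
    # Flag only substantial reuse; short background matches (e.g., a
    # song playing in someone's video) are left alone as likely innocent.
    return m.overlap_seconds >= 30 and m.views >= 100

print(should_report(Match(overlap_seconds=95.0, territory="US", views=12000)))  # True
```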
Will creators be able to make money from their videos and music?
There’s also the question of monetization. Will rights holders be able to earn money from copies of their work uploaded to Facebook? Likely at some point in the future, but first Facebook will need to fine-tune Rights Manager. The company can’t afford to complicate a system that’s still, for all practical purposes, in beta.
Overall I’m pleased with Facebook’s effort. Yes, it’s overdue and yes, it’s not (yet) perfect but it is a huge step in the right direction and hopefully can serve as a model for other social media and video sites across the web looking to do a better job thwarting piracy.
As I’ve written previously, I firmly believe UGC sites of a certain size (like Facebook, Vimeo, YouTube, et al.) should be required to offer this type of tech in order to qualify for safe harbor. Of course, that assumes the creaky old DMCA will be revised, and the odds of that actually happening anytime soon... well, I’ll leave that discussion for another day. In the meantime, I’m going to get busy on Facebook and upload some more reference files. So far I’ll give the new system a thumbs up!
As an indie film and broadcast journalism veteran, I'll share my perspectives on issues of interest to the creative community and beyond--Ellen Seidler