
Security researcher publicly releases second Steam zero-day after being banned from Valve's bug bounty program

Savia Lobo
22 Aug 2019
6 min read
Updated with Valve’s response: In a statement on August 22, Valve said that its HackerOne bug bounty program should not have turned away Kravets when he reported the second vulnerability, calling the decision “a mistake”.

A Russian security researcher, Vasily Kravets, has found a second zero-day vulnerability in the Steam gaming platform in the span of two weeks. The researcher said he reported the first Steam zero-day vulnerability to Steam’s parent company, Valve, earlier in August and tried to have it fixed before public disclosure. However, “he said he couldn't do the same with the second because the company banned him from submitting further bug reports via its public bug bounty program on the HackerOne platform,” ZDNet reports.

Source: amonitoring.ru

The first flaw was a “privilege-escalation vulnerability that can allow an attacker to level up and run any program with the highest possible rights on any Windows computer with Steam installed. It was released after Valve said it wouldn’t fix it (Valve then published a patch, that the same researcher said can be bypassed),” according to Threatpost.

Although Kravets was banned from the HackerOne platform, he disclosed the second flaw, which enables a local privilege escalation in the Steam client, on Tuesday, saying it would be simple for any OS user to exploit. Kravets told Threatpost that he is not aware of a patch for the vulnerability. “Any user on a PC could do all actions from exploit’s description (even ‘Guest’ I think, but I didn’t check this). So [the] only requirement is Steam,” Kravets told Threatpost. He also said, “It’s sad and simple — Valve keeps failing. The last patch, that should have solved the problem, can be easily bypassed so the vulnerability still exists. Yes, I’ve checked, it works like a charm.”

Another security researcher, Matt Nelson, said he had found the exact same bug as Kravets, which “he too reported to Valve's HackerOne program, only to go through a similar bad experience as Kravets,” ZDNet reports. He said both Valve and HackerOne took five days to acknowledge the bug and later refused to patch it. Further, they locked the bug report when Nelson wanted to disclose the bug publicly and warn users. “Nelson later released proof-of-concept code for the first Steam zero-day, and also criticized Valve and HackerOne for their abysmal handling of his bug report”, ZDNet reports.

https://twitter.com/enigma0x3/status/1148031014171811841

“Despite any application itself could be harmful, achieving maximum privileges can lead to much more disastrous consequences. For example, disabling firewall and antivirus, rootkit installation, concealing of process-miner, theft any PC user’s private data — is just a small portion of what could be done,” said Kravets.

Kravets demonstrated the second Steam zero-day and also detailed the vulnerability on his website. Per Threatpost, as of August 21, “Valve did not respond to a request for comment about the vulnerability, bug bounty incident and whether a patch is available. HackerOne did not have a comment.”

Other researchers who have participated in Valve’s bug bounty program are infuriated by Valve’s decision to not only block Kravets from submitting further bug reports but also refuse to patch the flaw.
https://twitter.com/Viss/status/1164055856230440960
https://twitter.com/kamenrannaa/status/1164408827266998273

A user on Reddit writes, “If management isn't going to take these issues seriously and respect a bug bounty program, then you need to bring about some change from within. Now they are just getting bug reports for free.”

Nelson said the HackerOne “representative said the vulnerability was out of scope to qualify for Valve’s bug bounty program,” Ars Technica writes. Further, when Nelson said that he was not seeking any monetary gain and only wanted the public to be aware of the vulnerability, the HackerOne representative asked Nelson to “please familiarize yourself with our disclosure guidelines and ensure that you’re not putting the company or yourself at risk. https://www.hackerone.com/disclosure-guidelines.”

https://twitter.com/enigma0x3/status/1160961861560479744

Nelson also reported the vulnerability directly to Valve. Valve first acknowledged the report and “noted that I shouldn’t expect any further communication.” He never heard anything more from the company. In an email to Ars Technica, Nelson writes, “I can certainly believe that the scoping was misinterpreted by HackerOne staff during the triage efforts. It is mind-blowing to me that the people at HackerOne who are responsible for triaging vulnerability reports for a company as large as Valve didn’t see the importance of Local Privilege Escalation and simply wrote the entire report off due to misreading the scope.”

A HackerOne spokeswoman told Ars Technica, “We aim to explicitly communicate our policies and values in all cases and here we could have done better. Vulnerability disclosure is an inherently murky process and we are, and have always been, committed to protecting the interests of hackers. Our disclosure guidelines emphasize mutual respect and empathy, encouraging all to act in good faith and for the benefit of the common good.”

Katie Moussouris, founder and CEO of Luta Security, also said, “Silencing the researcher on one issue is in complete violation of the ISO standard practices, and banning them from reporting further issues is simply irresponsible to affected users who would otherwise have benefited from these researchers continuing to engage and report issues privately to get them fixed. The norms of vulnerability disclosure are being warped by platforms that put profits before people.”

Valve agrees that turning down Kravets’ request was “a mistake”

Valve, in a statement on August 22, said that its HackerOne bug bounty program should not have turned away Kravets when he reported the second vulnerability, and called it a mistake. In an email statement to ZDNet, a Valve representative said that “the company has shipped fixes for the Steam client, updated its bug bounty program rules, and is reviewing the researcher's ban on its public bug bounty program.”

The company also writes, “Our HackerOne program rules were intended only to exclude reports of Steam being instructed to launch previously installed malware on a user’s machine as that local user. Instead, misinterpretation of the rules also led to the exclusion of a more serious attack that also performed local privilege escalation through Steam. In regards to the specific researchers, we are reviewing the details of each situation to determine the appropriate actions. We aren’t going to discuss the details of each situation or the status of their accounts at this time.”

To know more about this news in detail, read Kravets’ blog post.
You could also check out Threatpost’s detailed coverage.

Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced it will no longer support Mercurial repositories after May 2020. Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its launch in 2008 and Git since October 2011. Now, almost ten years into that shared journey, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API.

The official announcement reads, “Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020.”

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them and that Mercurial held a special place in their heart. But according to the Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. On top of that, Mercurial usage on Bitbucket has seen a steady decline, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. Hence the decision to remove the Mercurial repos.

How can users migrate and export their Mercurial repos?

The Bitbucket team recommends users migrate their existing Mercurial repos to Git (a sketch of one community migration approach appears below). They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread, where users can discuss conversion tools, migration, tips, and troubleshooting. If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available to them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.
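Bitbucket leaves the actual conversion to users. As a rough, hedged illustration of one common community approach, the sketch below wraps the third-party hg-fast-export tool in a small Python helper; the tool is assumed to be installed and on PATH, and the repo paths and helper name are invented for illustration, not anything Bitbucket ships.

```python
# A minimal sketch of one community hg-to-git migration approach, wrapping the
# third-party hg-fast-export tool (assumed installed and on PATH). Paths and
# the helper name are illustrative, not part of anything Bitbucket provides.
import subprocess
from pathlib import Path

def migrate_hg_to_git(hg_repo: str, git_repo: str) -> None:
    """Replay the full Mercurial history of hg_repo into a fresh Git repo."""
    Path(git_repo).mkdir(parents=True, exist_ok=True)
    subprocess.run(["git", "init"], cwd=git_repo, check=True)
    # hg-fast-export streams Mercurial changesets into Git's fast-import format.
    subprocess.run(
        ["hg-fast-export.sh", "-r", str(Path(hg_repo).resolve())],
        cwd=git_repo, check=True,
    )
    # Materialize the working tree from the freshly imported history.
    subprocess.run(["git", "checkout", "HEAD"], cwd=git_repo, check=True)

if __name__ == "__main__":
    migrate_hg_to_git("my-hg-repo", "my-git-repo")
```

After converting locally, the resulting Git repository can be pushed to any Git host before the June 1, 2020 removal date.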
Community shows anger and sadness over the decision to discontinue Mercurial support

There is outrage among Mercurial users, who are extremely unhappy and saddened by Bitbucket's decision, and they have expressed their anger not just on one platform but across multiple forums and community discussions. Users feel that Bitbucket’s decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil.

On Hacker News, users speculated that the decision was driven by market potential rather than by technically superior architecture and ease of use. They feel GitHub successfully marketed Git, which is how the two have become synonymous in the developer community. One of them comments, “It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers. Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket.”

Another user comments that Mercurial support was the only reason he still used Bitbucket, given that GitHub is miles ahead of it, and that once Mercurial support ends, Bitbucket will end soon. The comment reads, “Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die.”

On Reddit, programming folks see this as a big change, since Bitbucket is the major Mercurial hosting provider, and they feel the announcement came on pretty short notice: they need more time for migration.

Apart from the developer community forums, users have expressed displeasure on the Atlassian community blog as well. A team of scientists commented, “Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged.”

Google open sources an on-device, real-time hand gesture recognition algorithm built with MediaPipe

Sugandha Lahoti
21 Aug 2019
3 min read
Google researchers have unveiled a new real-time hand tracking algorithm that could be a breakthrough for people communicating via sign language. The algorithm uses machine learning to compute 3D keypoints of a hand from a video frame. The research is implemented in MediaPipe, an open-source, cross-platform framework for building multimodal (e.g. video, audio, or any time-series data) applied ML pipelines. What is interesting is that the 3D hand perception can be viewed in real time on a mobile phone.

How real-time hand perception and gesture recognition work with MediaPipe

The algorithm is built using the MediaPipe framework, within which the pipeline is constructed as a directed graph of modular components. The pipeline employs three different models: a palm detector, a hand landmark detector, and a gesture recognizer.

The palm detector operates on full images and outputs an oriented bounding box. It is a single-shot detector model called BlazePalm, which achieves an average precision of 95.7% in palm detection. Next, the hand landmark model takes the cropped image defined by the palm detector and returns 3D hand keypoints. To detect key points on the palm images, the researchers manually annotated around 30K real-world images with 21 coordinates. They also generated a synthetic dataset to improve the robustness of the hand landmark detection model.

The gesture recognizer then classifies the previously computed keypoint configuration into a discrete set of gestures. The algorithm determines the state of each finger, e.g. bent or straight, by the accumulated angles of its joints (a rough sketch of this idea appears below). The existing pipeline supports counting gestures from multiple cultures, e.g. American, European, and Chinese, and various hand signs including “Thumb up”, closed fist, “OK”, “Rock”, and “Spiderman”. The models were also trained to work in a wide variety of lighting situations and with a diverse range of skin tones.

Gesture recognition - Source: Google blog

With MediaPipe, the researchers built their pipeline as a directed graph of modular components called Calculators. Individual calculators, such as cropping, rendering, and neural network computations, can be performed exclusively on the GPU, and the team employed TFLite GPU inference on most modern phones.

The researchers are open sourcing the hand tracking and gesture recognition pipeline in the MediaPipe framework along with the source code. The researchers Valentin Bazarevsky and Fan Zhang write in a blog post, “Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method, achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.”

People commended the fact that the algorithm can run on mobile devices and is useful for people who communicate via sign language.

https://twitter.com/SOdaibo/status/1163577788764495872
https://twitter.com/anshelsag/status/1163597036442148866
https://twitter.com/JonCorey1/status/1163997895835693056
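The finger-state idea described above, classifying a finger as bent or straight from accumulated joint angles, can be sketched directly from the 21 landmarks. The snippet below is a rough illustration using the MediaPipe Python bindings that Google shipped after this post (assuming `mediapipe` and `opencv-python` are installed); the landmark indices are MediaPipe's, but the angle threshold is an arbitrary guess, not the researchers' actual classifier.

```python
import math
import cv2
import mediapipe as mp

# Landmark indices per finger in MediaPipe's 21-point hand model:
# (MCP, PIP, DIP, TIP). The thumb uses different joints and is omitted here.
FINGERS = {
    "index": (5, 6, 7, 8),
    "middle": (9, 10, 11, 12),
    "ring": (13, 14, 15, 16),
    "pinky": (17, 18, 19, 20),
}

def joint_angle(a, b, c) -> float:
    """Angle at landmark b (degrees) between segments b->a and b->c."""
    v1 = (a.x - b.x, a.y - b.y)
    v2 = (c.x - b.x, c.y - b.y)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2) + 1e-9
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def finger_states(hand_landmarks) -> dict:
    """Label each finger straight/bent from its accumulated joint angles."""
    lm = hand_landmarks.landmark
    states = {}
    for name, (mcp, pip, dip, tip) in FINGERS.items():
        # A straight finger has near-180-degree angles at the PIP and DIP
        # joints, so its accumulated deviation from 180 stays near zero.
        bend = (180 - joint_angle(lm[mcp], lm[pip], lm[dip])) \
             + (180 - joint_angle(lm[pip], lm[dip], lm[tip]))
        states[name] = "straight" if bend < 40 else "bent"  # threshold is a guess
    return states

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
image = cv2.cvtColor(cv2.imread("hand.jpg"), cv2.COLOR_BGR2RGB)
results = hands.process(image)
if results.multi_hand_landmarks:
    print(finger_states(results.multi_hand_landmarks[0]))
```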

Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests

Sugandha Lahoti
20 Aug 2019
5 min read
Update, August 23, 2019: After Twitter and Facebook, Google has shut down 210 YouTube channels that were tied to misinformation about the Hong Kong protests. The article has been updated accordingly.

Chinese state-run media agencies have been buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. These ads, reported by Pinboard’s Twitter account, were circulated by the state-run news agency Xinhua, describing the protesters as "escalating violence" and calling for "order to be restored." In reality, the Hong Kong protests have been described as a completely peaceful march. Pinboard warned and criticized Twitter about these tweets and asked for their takedown. Though Twitter and Facebook are banned in China, Chinese state-run media runs several English-language accounts to present its views to the outside world.

https://twitter.com/pinboard/status/1162711159000055808
https://twitter.com/Pinboard/status/1163072157166886913

Twitter bans 936 accounts managed by the Chinese state

Following this revelation, Twitter said in a blog post yesterday that it had discovered a “significant state-backed information operation focused on the situation in Hong Kong, specifically the protest movement”. It identified 936 accounts that were undermining “the legitimacy and political positions of the protest movement on the ground.” It also found a larger, spammy network of approximately 200,000 accounts representing the most active portions of this campaign; these were suspended for a range of violations of Twitter's platform manipulation policies. The accounts were able to access Twitter through VPNs and over a "specific set of unblocked IP addresses" from within China. “Covert, manipulative behaviors have no place on our service — they violate the fundamental principles on which our company is built,” said Twitter.

Twitter bans ads from Chinese state-run media

Twitter also banned advertising from Chinese state-run news media entities across the world and declared that affected accounts will be free to continue to use Twitter to engage in public conversation, just not its advertising products. The policy applies to news media entities that are either financially or editorially controlled by the state, Twitter said. Affected entities will be notified directly and given 30 days to offboard from advertising products; no new campaigns will be allowed. However, Pinboard argues that 30 days is too long and that Twitter should suspend Xinhua's ad account immediately.

https://twitter.com/Pinboard/status/1163676410998689793

It also calls on Twitter to disclose how much money it took from Xinhua, how many ads it ran for Xinhua since the start of the Hong Kong protests in June, and how those ads were targeted.

Facebook blocks Chinese accounts engaged in inauthentic behavior

Following a tip shared by Twitter, Facebook also removed seven Pages, three Groups, and five Facebook accounts involved in coordinated inauthentic behavior as part of a small network that originated in China and focused on Hong Kong. However, unlike Twitter, Facebook did not announce any policy changes in response to the discovery. YouTube was also notably absent from the fight against Chinese misinformation propaganda.

https://twitter.com/Pinboard/status/1163694701716766720

However, on August 22, YouTube axed 210 channels found to be spreading misinformation about the Hong Kong protests.
“Earlier this week, as part of our ongoing efforts to combat coordinated influence operations, we disabled 210 channels on YouTube when we discovered channels in this network behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong,” Shane Huntley, director of software engineering for Google Security’s Threat Analysis Group, said in a blog post. “We found use of VPNs and other methods to disguise the origin of these accounts and other activity commonly associated with coordinated influence operations.”

Kyle Bass, Chief Investment Officer at Hayman Capital Management, called on all social media outlets to ban all Chinese state-run propaganda sources. He tweeted, “Twitter, Facebook, and YouTube should BAN all State-backed propaganda sources in China. It’s clear that these 200,000 accounts were set up by the “state” of China. Why allow Xinhua, global times, china daily, or any others to continue to act? #BANthemALL”

Public acknowledges Facebook and Twitter’s role in exposing Chinese state media

Experts and journalists were appreciative of the role social media played in exposing those responsible and of how the platforms are responding to state interventions. Bethany Allen-Ebrahimian, President of the International China Journalist Association, called it huge news. “This is the first time that US social media companies are openly accusing the Chinese government of running Russian-style disinformation campaigns aimed at sowing discord”, she tweeted. She added, “We’ve been seeing hints that China has begun to learn from Russia’s MO, such as in Taiwan and Cambodia. But for Twitter and Facebook to come out and explicitly accuse the Chinese govt of a disinformation campaign is another whole level entirely.”

Adam Schiff, Representative (D-CA, 28th District), tweeted, “Twitter and Facebook announced they found and removed a large network of Chinese government-backed accounts spreading disinformation about the protests in Hong Kong. This is just one example of how authoritarian regimes use social media to manipulate people, at home and abroad.” He added, “Social media platforms and the U.S. government must continue to identify and combat state-backed information operations online, whether they’re aimed at disrupting our elections or undermining peaceful protesters who seek freedom and democracy.”

Social media platforms took an appreciable step against Chinese state-run media actors attempting to manipulate their platforms to discredit grassroots organizing in Hong Kong. It will be interesting to see whether they continue to protect individual freedoms and provide a safe and transparent platform if state actors from countries where they have huge audiences, like India or the US, adopt similar tactics to suppress or manipulate the public or target movements.

Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Bhagyashree R
19 Aug 2019
5 min read
Inspired by Mozilla’s anti-tracking policy, Apple has announced its intention to implement the WebKit Tracking Prevention Policy in Safari, the details of which it shared last week. This policy outlines the types of tracking techniques that will be prevented in WebKit to ensure user privacy. The anti-tracking mitigations listed in the policy will be applied “universally to all websites, or based on algorithmic, on-device classification.”

https://twitter.com/webkit/status/1161782001839607809

Web tracking is the collection of user data over multiple web pages and websites, linked back to individual users via a unique user identifier. All your previous interactions with any website can be recorded and recalled with the help of a tracking mechanism such as cookies (a minimal sketch of the cookie-based variant appears below). The data tracked includes the things you have searched, the websites you have visited, the things you have clicked on, the movements of your mouse around a web page, and more.

Organizations and companies rely heavily on web tracking to gain insight into their users' behavior and preferences, chiefly for user profiling and targeted marketing. While such tracking helps businesses, it can be pervasive and used for more sinister purposes. In the recent past we have seen many companies, including big tech firms like Facebook and Google, involved in scandals over violating users' online privacy. Consider, for instance, Facebook’s Cambridge Analytica scandal and Google’s cookie case.

Apple aims to create “a healthy web ecosystem, with privacy by design”

The WebKit Tracking Prevention Policy will prevent several tracking techniques, including cross-site tracking, stateful tracking, covert stateful tracking, navigational tracking, fingerprinting, covert tracking, and other unknown techniques that do not fall under these categories. Where a tracking technique cannot be prevented without undue harm to the user, WebKit will limit its capability; if that also does not help, users will be asked for their consent.

Apple will treat any attempt to subvert its anti-tracking methods as a security vulnerability. “We treat circumvention of shipping anti-tracking measures with the same seriousness as an exploitation of security vulnerabilities,” Apple wrote, warning that it may add more restrictions without prior notice against parties who attempt to circumvent the tracking prevention methods. Apple further mentioned that there won’t be any exception even for a valid use of a technique that is also used for tracking. The announcement reads, “But WebKit often has no technical means to distinguish valid uses from tracking, and doesn’t know what the parties involved will do with the collected data, either now or in the future.”

The WebKit Tracking Prevention Policy’s unintended impact

With the implementation of this policy, Apple warns of certain unintended repercussions as well. Among the possibly affected features are funding websites using targeted or personalized advertising, federated login using a third-party login provider, fraud prevention, and more. Where there are tradeoffs, WebKit will prioritize user benefits over current website practices. Apple promises to limit this unintended impact and may update the tracking prevention methods to permit certain use cases. In the future, it will also introduce new web technologies that allow these practices without compromising users' online privacy, such as the Storage Access API and Privacy-Preserving Ad Click Attribution.
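To make the policy's target concrete, here is a minimal, hedged sketch of the classic cookie-based cross-site tracking pattern that WebKit classifies as cross-site, stateful tracking. The endpoint, cookie name, and the choice of Flask are all illustrative; this is not Apple's code or any real tracker's.

```python
# A minimal sketch of cookie-based cross-site tracking: a third-party
# "tracking pixel" embedded on many unrelated sites. All names here are
# illustrative (pip install flask to run).
import uuid
from flask import Flask, make_response, request

app = Flask(__name__)
visits = {}  # tracker-side log: user_id -> pages the user was seen on

@app.route("/pixel.gif")
def pixel():
    # Re-identify the browser, or mint a fresh identifier on first sight.
    user_id = request.cookies.get("uid") or uuid.uuid4().hex
    # The Referer header reveals which embedding site the user is browsing.
    visits.setdefault(user_id, []).append(request.headers.get("Referer"))
    resp = make_response(b"GIF89a")
    resp.headers["Content-Type"] = "image/gif"
    # A long-lived third-party cookie links visits across unrelated sites --
    # exactly the stateful cross-site tracking the policy prevents.
    resp.set_cookie("uid", user_id, max_age=60 * 60 * 24 * 365)
    return resp
```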
What users are saying about Apple’s anti-tracking policy

At a time of increasing concern about users' online privacy, this policy comes as a blessing. Many users are appreciating the move, while some fear it will affect certain user-friendly features. In an ongoing discussion on Hacker News, a user commented, “The fact that this makes behavioral targeting even harder makes me very happy.”

Some also believe that focusing on online tracking protection will give browsers an edge over Google’s Chrome. A user said, “One advantage of Google's dominance and their business model being so reliant on tracking, is that it's become the moat for its competitors: investing energy into tracking protection is a good way for them to gain a competitive advantage over Google, since it's a feature that Google will not be able to copy. So as long as Google's competitors remain in business, we'll probably at least have some alternatives that take privacy seriously.”

When asked about the added restrictions that will be applied if a party is found circumventing tracking prevention, a member of the WebKit team commented, “We're willing to do specifically targeted mitigations, but only if we have to. So far, nearly everything we've done has been universal or algorithmic. The one exception I know of was to delete tracking data that had already been planted by known circumventors, at the same time as the mitigation to stop anyone else from using that particular hole (HTTPS supercookies).”

Some users had questions about the features that will be impacted by the policy. One wrote, “While I like the sentiment, I hate that Safari drops cookies after a short period of non-use. I wind up having to re-login to sites constantly while Chrome does it automatically.” Another added, “So what is going to happen when Apple succeeds in making it impossible to make any money off advertisements shown to iOS users on the web? I'm currently imagining a future where publishers start to just redirect iOS traffic to install their app, where they can actually make money. Good news for the walled garden, I guess?”

Read Apple’s official announcement to know more about the WebKit Tracking Prevention Policy.

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube

Sugandha Lahoti
14 Aug 2019
4 min read
Deepfakes are becoming frighteningly, indistinguishably real. A YouTube clip of Bill Hader in conversation with David Letterman on his late-night show in 2008 is going viral: Hader’s face subtly shifts into Tom Cruise’s as Hader does his impression. The clip has been viewed over 3 million times and was uploaded by Ctrl Shift Face (a Slovakian citizen who goes by the name of Tom), who has created other entertaining videos using Deepfake technology. For the unaware, a Deepfake uses artificial intelligence and deep neural networks to alter audio or video and pass it off as true or original content.

https://www.youtube.com/watch?v=VWrhRBb-1Ig

Deepfakes are problematic because they make it hard to differentiate between fake and real videos or images, giving people the liberty to use them for harassment and illegal activities. The most common uses of deepfakes are revenge porn, political abuse, and fake celebrity videos such as this one.

The top comments on the video clip express the dangers of realistic AI manipulation:

“The fade between faces is absolutely unnoticeable and it's flipping creepy. Nice job!”
“I’m always amazed with new technology, but this is scary.”
“Ok, so video evidence in a court of law just lost all credibility”

https://twitter.com/TheMuleFactor/status/1160925752004624387

Deepfakes can also be used as a weapon of misinformation, since they can maliciously hoax governments and populations and cause internal conflict. Gavin Sheridan, CEO of Vizlegal, also tweeted the clip: “Imagine when this is all properly weaponized on top of already fractured and extreme online ecosystems and people stop believing their eyes and ears.”

He also talked about the future impact. “True videos will be called fake videos, fake videos will be called true videos. People steered towards calling news outlets "fake", will stop believing their own eyes. People who want to believe their own version of reality will have all the videos they need to support it,” he tweeted. He also wondered whether we would need A-list movie actors at all in the future, and whether we could choose which actor portrays which role. His tweet reads, “Will we need A-list actors in the future when we could just superimpose their faces onto the faces of other actors? Would we know the difference? And could we not choose at the start of a movie which actors we want to play which roles?”

The past year has seen accelerated growth in the use of deepfakes. In June, a fake video of Mark Zuckerberg was posted on Instagram under the username bill_posters_uk, in which Zuckerberg appears to give a threatening speech about the power of Facebook. Facebook had received strong criticism for promoting fake videos on its platform when, in May, the company refused to remove a doctored video of senior politician Nancy Pelosi. Samsung researchers also released a deepfake technique that could animate faces with just a voice and a picture, using temporal GANs. Following this, the House Intelligence Committee held a hearing to examine the public risks posed by “deepfake” videos.

Tom, the creator of the viral video, told The Guardian that he doesn't see deepfake videos as the end of the world and hopes his deepfakes will raise public awareness of the technology's potential for misuse. “It’s an arms race; someone is creating deepfakes, someone else is working on other technologies that can detect deepfakes. I don’t really see it as the end of the world like most people do. People need to learn to be more critical.
The general public are aware that photos could be Photoshopped, but they have no idea that this could be done with video.”

Ctrl Shift Face is also on Patreon, offering bonus materials, behind-the-scenes footage, deleted scenes, and early access to videos for those who support him monetarily.
Vulnerabilities in the Picture Transfer Protocol (PTP) allow researchers to inject ransomware into Canon’s DSLR camera

Savia Lobo
13 Aug 2019
5 min read
At DefCon 27, Eyal Itkin, a vulnerability researcher at Check Point Software Technologies, demonstrated how vulnerabilities in the Picture Transfer Protocol (PTP) allowed him to infect a Canon EOS 80D DSLR with ransomware over a rogue WiFi connection. Beyond image transfer, PTP contains dozens of different commands that support anything from taking a live picture to upgrading the camera’s firmware.

The researcher chose Canon’s EOS 80D DSLR camera for three major reasons:

Canon is the largest DSLR maker, controlling more than 50% of the market.
The EOS 80D supports both USB and WiFi.
Canon has an extensive “modding” community, called Magic Lantern, an open-source free software add-on that adds new features to Canon EOS cameras.

Itkin highlighted six vulnerabilities in the PTP implementation that can allow a hacker to infiltrate the DSLR, inject ransomware, and lock the device, after which users might have to pay a ransom to free up their camera and picture files:

CVE-2019-5994 – Buffer Overflow in SendObjectInfo (opcode 0x100C)
CVE-2019-5998 – Buffer Overflow in NotifyBtStatus (opcode 0x91F9)
CVE-2019-5999 – Buffer Overflow in BLERequest (opcode 0x914C)
CVE-2019-6000 – Buffer Overflow in SendHostInfo (opcode 0x91E4)
CVE-2019-6001 – Buffer Overflow in SetAdapterBatteryReport (opcode 0x91FD)
CVE-2019-5995 – Silent malicious firmware update

Itkin’s team informed Canon about the vulnerabilities on March 31, 2019. On August 6, Canon published a security advisory informing users that “at this point, there have been no confirmed cases of these vulnerabilities being exploited to cause harm” and asking them to take the advised measures to ensure safety. Itkin told The Verge, “due to the complexity of the protocol, we do believe that other vendors might be vulnerable as well, however, it depends on their respective implementation”. Though Itkin worked only with the Canon model, he said DSLRs from other companies may also be at high risk.

How Itkin’s team discovered the vulnerabilities in Canon’s DSLR

After the team succeeded in dumping the camera’s firmware and loading it into their disassembler (IDA Pro), they say finding the PTP layer was an easy task, for two reasons:

The PTP layer is command-based, and every command has a unique numeric opcode.
The firmware contains many indicative strings, which eases the task of reverse-engineering it.

Next, the team traversed back from the PTP OpenSession handler and found the main function that registers all of the PTP handlers according to their opcodes. “When looking on the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, pointing to the fact that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them is very high,” Itkin wrote in the research report.

Each PTP command handler implements the same code API. The API makes use of the ptp_context object, an object that is partially documented thanks to Magic Lantern, Itkin said. The team realized that most of the commands were relatively simple. “They receive only a few numeric arguments, as the protocol supports up to 5 such arguments for every command. After scanning all of the supported commands, the list of 148 commands was quickly narrowed down to 38 commands that receive an input buffer,” Itkin writes.
“From an attacker’s viewpoint, we have full control of this input buffer, and therefore, we can start looking for vulnerabilities in this much smaller set of commands. Luckily for us, the parsing code for each command uses plain C code and is quite straight-forward to analyze,” he further added. Following this approach, they found their first vulnerabilities, and then the rest.

Check Point and Canon have advised users to ensure that their cameras are using the latest firmware and to install patches whenever they become available. Also, when the camera is not in use, owners should keep its Wi-Fi turned off.

A user on Hacker News points out, “It could get even worse if the perpetrator instead of bricking the device decides to install a backdoor that silently uploads photos to a server whenever a wifi connection is established.”

Another user, on Petapixel, explained what quick measures users should take: “A custom firmware can close the vulnerability also if they put in the work. Just turn off wifi and don't use random computers in grungy cafes to connect to your USB port and you should be fine. It may or may not happen but it leaves the door open for awesome custom firmware to show up. Easy ones are real CLOG for 1dx2. For the 5D4, I would imagine 24fps HDR, higher res 120fps, and free Canon Log for starters. For non tech savvy people that just leave wifi on all the time, that visit high traffic touristy photo landmarks they should update. Especially if they have no interest in custom firmware.”

Another Petapixel user highlighted that the attack has many prerequisites: “this hack relies on a serious number of things to be in play before it works, there is no mention of how to get the camera working again, is it just a case of flashing the firmware and accepting you may have lost a few images ?... there’s a lot more things to worry about than this.”

Check Point has demonstrated the entire attack in the following YouTube video.

https://youtu.be/75fVog7MKgg

To know more about this news in detail, read Eyal Itkin’s complete research on Check Point.

OpenTracing and OpenCensus merge into OpenTelemetry project; Google introduces OpenCensus Web

Sugandha Lahoti
13 Aug 2019
4 min read
Google has introduced an extension of OpenCensus called OpenCensus Web, a library for collecting application performance and behavior monitoring data from web pages. The library focuses on the frontend web application code that executes in the browser, allowing it to collect user-side performance data. It is still in alpha, with the API subject to change. This is great news for websites that are heavy by nature, such as media-driven pages like Instagram, Facebook, YouTube, and Amazon, and for web apps.

OpenCensus Web interacts with three application components: the frontend web server, the browser JS, and the OpenCensus Agent. The agent receives traces from the frontend web server's proxy endpoint, or directly from the browser JS, and exports them to a trace backend.

Features of OpenCensus Web

OpenCensus Web traces spans for the initial page load, including server-side HTML rendering.
These initial-load spans include detailed annotations for DOM load events as well as network events.
It automatically traces all click events, as long as the click is done on a DOM element and is not disabled.
OC Web traces route transitions between the different sections of your page by monkey-patching the History API.
It allows users to create custom spans for tasks or code involved in user interaction (a sketch of the same span model appears below).
It performs automatic spans for HTTP requests and collects browser performance data.
OC Web relates user interactions back to the initial page load tracing.

Along with this release, the OpenCensus family of projects is merging with OpenTracing into OpenTelemetry. This means all of the OpenCensus community will be moving over to OpenTelemetry, Google and Omnition included. OpenCensus Web’s functionality will be migrated into OpenTelemetry JS once that project is ready. Omnition's founder wrote on Hacker News, “Although Google will be heavily involved in both the client libraries and agent development, Omnition, Microsoft, and others will also be major contributors.”

Another comment on Hacker News explains the merger in more detail: “OpenCensus is a Google project to standardize metrics and distributed tracing. It's an API spec and libraries for various languages with varying backend support. OpenTracing is a CNCF project as an API for distributed tracing with a separate project called OpenMetrics for the metrics API. Neither include libraries and rely on the community to provide them. The industry decided for once that we don't need all this competing work and is consolidating everything into OpenTelemetry that combines an API for tracing and metrics along with libraries. Logs (the 3rd part of observability) are in the planning phase. OpenCensus Web is bringing the tracing/metrics part to your frontend JS so you can measure how your webapp works in addition to your backend apps and services.”

By September 2019, OpenTelemetry plans to reach parity with existing projects for C#, Golang, Java, NodeJS, and Python. When each language reaches parity, the corresponding OpenTracing and OpenCensus projects will be sunset (old projects will be frozen, but the new project will continue to support existing instrumentation for two years, via a backwards compatibility bridge). Read more on the OpenTelemetry roadmap.

Public reaction to OpenCensus Web has been positive. People have expressed their opinions on a Hacker News thread.
“This is great, as the title says, this means that web applications can now have tracing across the whole stack, all within the same platform.”

“I am also glad to know that the merge between OpenTracing and OpenCensus is still going well. I started adding telemetry to the projects I maintain in my current job and so far it has been very helpful to detect not only bottlenecks in the operations but also sudden spikes in the network traffic since we depend on so many 3rd-party web API that we have no control over. Thank you OpenCensus team for providing me with the tools to learn more.”

For more information about OpenCensus Web, visit Google’s blog.
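OpenCensus Web itself ships as JavaScript, but the custom-span model it exposes is shared across the OpenCensus family. As a rough sketch of that model, here is a custom span with an annotation in the OpenCensus Python library (assuming `pip install opencensus`); the span name and annotation text are invented for illustration.

```python
# A minimal sketch of the OpenCensus span model, shown via the Python library;
# OC Web offers the same idea (custom spans plus annotations) in browser JS.
import time
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(sampler=AlwaysOnSampler())  # default exporter prints spans

# A custom span around a user-facing task, analogous to the custom spans
# OC Web lets you create for work triggered by a user interaction.
with tracer.span(name="load_dashboard") as span:
    span.add_annotation("fetching data")  # detailed annotations, as in OC Web
    time.sleep(0.1)                       # stand-in for real work
```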

Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”

Sugandha Lahoti
13 Aug 2019
7 min read
PyLondinium, the conference for Python developers, was held in London from the 14th to the 16th of June, 2019. In the Sunday keynote, Łukasz Langa, the creator of Black (the Python code formatter) and the Python 3.8 release manager, spoke about where Python could be in 2020 and why Python developers should try new browser- and mobile-friendly versions of Python.

Python is an extremely expressive language, says Łukasz. “When I first started I was amazed how much you can accomplish with just a few lines of code especially compared to Java. But there are still languages that are even more expressive and enables even more compact notation.” So what makes Python special? Python is runnable pseudocode; it reads like English; it is very elegant. “Our responsibility as developers,” Łukasz mentions, “is to make Python’s runnable pseudocode convenient to use for new programmers.”

Python has gotten much bigger, more stable, and more complex in the last decade. However, the lowest-hanging fruit, Łukasz says, has already been picked, and what's left is the maintenance of an increasingly fossilizing interpreter and a stunted library. This maintenance is both tedious and tricky, especially for a dynamic interpreted language like Python.

Python being a community-run project is both a blessing and a curse

Łukasz talks about how Python is the biggest community-run programming language on the planet. Other programming languages with similar or larger market penetration are run either by single corporations or by multiple committees. Being a community project is both a blessing and a curse for Python, says Łukasz. It's a blessing because it's truly free from shareholder pressure and market swings. It’s a curse because almost the entire core developer team is volunteering their time and effort for free, and while the Python Software Foundation graciously funds infrastructure and events, it does not currently employ any core developers. Since both "Python" and "software" are right in the name of the foundation, Łukasz says he wants that to change. “If you don't pay people, you have no influence over what they work on. Core developers often choose problems to tackle based on what inspires them personally. So we never had an explicit roadmap on where Python should go and what problems or developers should focus on,” he adds.

Python is no longer governed by a BDFL, says Łukasz: “My personal hope is that the steering council will be providing visionary guidance from now on and will present us with an explicit roadmap on where we should go.”

Interesting and dead projects in Python

Łukasz talked about mypyc and invited people to contribute to the project, and organizations to sponsor it. Mypyc is a compiler that compiles mypy-annotated, statically typed Python modules into CPython C extensions. It restricts the Python language to enable compilation and supports a subset of Python (a sketch of the kind of module it targets appears below).

He also mentioned MicroPython, a Kickstarter-funded subset of Python optimized to run on microcontrollers and other constrained environments: a compatible runtime for devices with very little memory (16 kilobytes of RAM and 256 kilobytes of code memory) and minimal computing power. He also talks about the micro:bit.
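For flavor, here is a hedged sketch of the kind of module mypyc targets: plain, statically annotated Python that also runs unmodified on CPython. The file name and function are invented for illustration; compiling it would be roughly `mypyc fib.py` with mypyc installed.

```python
# fib.py -- a sketch of the statically typed Python subset mypyc compiles.
# The same file runs unmodified on CPython; with mypyc installed,
# `mypyc fib.py` builds it into a CPython C extension instead.

def fib(n: int) -> int:
    """Iterative Fibonacci; the int annotations let the compiler specialize."""
    a: int = 0
    b: int = 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    print(fib(30))  # 832040
```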
He also mentioned the many dead, dying, or defunct projects for alternative Python interpreters, including Unladen Swallow, Pyston, and IronPython, and talked about PyPy, the JIT Python compiler written in Python. Łukasz mentions that since PyPy is written in Python 2, it is one of the most complex Python 2 applications in the industry. “This is at risk at the moment,” says Łukasz, “since it’s a large Python 2 codebase that needs updating to Python 3. Without a tremendous investment, it is very unlikely to ever migrate to Python 3.” Trying to replicate CPython quirks and bugs also requires a lot of effort.

Python should be aligned with where developer trends are shifting

Łukasz believes that a stronger division between the language and its reference implementation is important for Python. He declared, “If Python stays synonymous with CPython for too long, we’ll be in big trouble.” This is because CPython is not available where developer trends are shifting. On the web, the lingua franca is now JavaScript. On the two biggest mobile operating systems there are Swift, the modern take on Objective-C, and Kotlin, the modern take on Java. For VR, AR, and 3D games, there is C# via Unity. While Python is growing fast, it is not winning ground in two big areas, the browser and mobile, and it is slowly losing ground in systems orchestration, where Go is gaining traction. He adds that “if there were not the rise of machine learning and artificial intelligence, Python would have not survived the transition between Python 2 and Python 3.”

Łukasz mentions that providing a clear, supported, official option for the client-side web is what Python needs in order to satisfy the legions of people who want to use it there. He says that for Python the programming language to reach new heights, we need a new kind of Python, one that caters to where developer trends are shifting: mobile, web, VR, AR, and 3D games. There should be more projects experimenting with Python for these platforms, especially with restricted versions of the language, because they are easier to optimize.

We need a Python compiler for the web and Python on mobile

Łukasz says we need a Python compiler for the web: something that compiles your Python code to the web platform directly. He also adds that, to be viable for professional production use, Python on the web must not be orders of magnitude slower than the default option (JavaScript), which is already better supported and has better documentation and training. Similarly, for mobile he wants a small Python runtime so that applications load fast and deliver quick user interactions. He gives the example of the Go programming language, stating how “one of Go’s claims to fame is the fact that they shipped static binaries so you only have one file. You can choose to still use containers but it’s not necessary; you don't have virtual ends, you don't have pip installs, and you don't have environments that you have to orchestrate.”

Łukasz further adds that the areas of modern focus where Python currently has no penetration don't require full compatibility with CPython. Starting out with a familiar subset of Python that looks like Python to the user would greatly simplify the development of a new runtime or compiler, and would potentially even fit the target platform better.

What if I want to work on CPython?

Łukasz says that developers can still work on CPython if they want to. “I'm not saying that CPython is a dead end; it will forever be an important runtime for Python. New people are still both welcome and needed in fact.
However, working on CPython today is different from working on it ten years ago; the runtime is mission-critical in many industries, which is why developers must be extremely careful.”

Łukasz summed up his talk by declaring, “I strongly believe that enabling Python on new platforms is an important job. I'm not saying Python as the entire programming language should just abandon what it is now. I would prefer for us to be able to keep Python exactly as it is and just move it to all new platforms. Albeit, it is not possible without multi-million dollar investments over many years.”

The talk was well received on Twitter, with viewers calling it “fantastic” and “enlightening”.

https://twitter.com/WillingCarol/status/1156411772472971264
https://twitter.com/freakboy3742/status/1156365742435995648
https://twitter.com/jezdez/status/1156584209366081536

You can watch the full keynote on YouTube.

At DefCon 27, DARPA's $10 million voting system could not be hacked by Voting Village hackers due to a bug

Savia Lobo
12 Aug 2019
4 min read
At the DefCon security conference in Las Vegas, hackers have come to the Voting Village every year for the last two years to scrutinize voting machines and analyze them for vulnerabilities. This year, at DefCon 27, the targeted machines included a $10 million project by DARPA (the Defense Advanced Research Projects Agency). However, hackers were unable to break into the system, not because of robust security features, but due to technical difficulties during setup. “A bug in the machines didn't allow hackers to access their systems over the first two days,” CNet reports.

DARPA announced this voting system in March this year, hoping that it “will be impervious to hacking”. The system is designed by Galois, an Oregon-based verifiable-systems firm. “The agency hopes to use voting machines as a model system for developing a secure hardware platform—meaning that the group is designing all the chips that go into a computer from the ground up, and isn’t using proprietary components from companies like Intel or AMD,” Wired reports. Linton Salmon, the project’s program manager at DARPA, says, “The goal of the program is to develop these tools to provide security against hardware vulnerabilities. Our goal is to protect against remote attacks.”

Voting Village co-founder Harri Hursti said the five machines brought in by Galois “seemed to have had a myriad of different kinds of problems. Unfortunately, when you're pushing the envelope on technology, these kinds of things happen." The DARPA machines are prototypes, currently running on virtualized versions of the hardware platforms they will eventually use. At Voting Village 2020, DARPA plans to bring complete systems for hackers to access.

Dan Zimmerman, principal researcher at Galois, said, “All of this is here for people to poke at. I don’t think anyone has found any bugs or issues yet, but we want people to find things. We’re going to make a small board solely for the purpose of letting people test the secure hardware in their homes and classrooms and we’ll release that.”

Sen. Wyden says if voting system security standards fail to change, the consequences will be much worse than the 2016 elections

After the cyberattacks on the 2016 U.S. presidential elections, securing voter data in next year's presidential election carries even higher stakes. Senator Ron Wyden said that if voting system security standards fail to change, the consequences could be far worse than the 2016 elections. In his speech on Friday at the Voting Village, Wyden said, "If nothing happens, the kind of interference we will see form hostile foreign actors will make 2016 look like child's play. We're just not prepared, not even close, to stop it."

Wyden proposed an election security bill requiring paper ballots in 2018. However, the bill was blocked in the Senate by Majority Leader Mitch McConnell, who called it partisan legislation. On Friday, a furious Wyden held McConnell responsible, calling him the reason Congress hasn't been able to fix election security issues. "It sure seems like Russia's No. 1 ally in compromising American election security is Mitch McConnell," Wyden said.

https://twitter.com/ericgeller/status/1159929940533321728

According to a security researcher, the voting system has a terrible software vulnerability

Dan Wallach, a security researcher at Rice University in Houston, Texas, told Wired, “There’s a terrible software vulnerability in there. I know because I wrote it.
It’s a web server that anyone can connect to and read/write arbitrary memory. That’s so bad. But the idea is that even with that in there, an attacker still won’t be able to get to things like crypto keys or anything really. All they would be able to do right now is crash the system.” According to CNet, “While the voting process worked, the machines weren't able to connect with external devices, which hackers would need in order to test for vulnerabilities. One machine couldn't connect to any networks, while another had a test suite that didn't run, and a third machine couldn't get online.” The machine's prototype allows people to vote with a touchscreen, print out their ballot and insert it into the verification machine, which ensures that votes are valid through a security scan. According to Wired, Galois even added vulnerabilities on purpose to see how its system defended against flaws. https://twitter.com/VotingVillageDC/status/1160663776884154369 To know more about this news in detail, head over to Wired report. DARPA plans to develop a communication platform similar to WhatsApp DARPA’s $2 Billion ‘AI Next’ campaign includes a Next-Generation Nonsurgical Neurotechnology (N3) program Black Hat USA 2019 conference Highlights: IBM’s ‘warshipping’, OS threat intelligence bots, Apple’s $1M bug bounty programs and much more!

How Data Privacy awareness is changing how companies do business

Guest Contributor
09 Aug 2019
7 min read
Not so long ago, data privacy was a relatively small part of business operations at many companies. They paid attention to it to a minor degree, but it was not a focal point or prime area of concern. That's changing now, as businesses recognize that failing to take privacy seriously harms the bottom line, and that revelation is reshaping how they operate and engage with customers.

One of the reasons for this change is the General Data Protection Regulation (GDPR), which now applies to all European Union companies and those that do business with EU residents. Some analysts viewed regulators as slow to begin enforcing GDPR with fines, but the fines imposed in 2019 total more than $100 million. In 2018, Twitter and Nielsen cited the GDPR as a reason for their falling share prices.

No Single Way to Demonstrate Data Privacy Awareness

One essential thing for companies to keep in mind is that there is no all-encompassing way to show customers they emphasize data security. Although security and privacy are distinct, they are closely related and impact each other. What privacy awareness means differs depending on how a business operates. For example, a business might collect data from customers and feed it back to them through an analytics platform. In this case, showing data privacy awareness might mean publishing a policy stating that the company will never sell a person's information to others. For an e-commerce company, emphasizing a commitment to keeping customer information secure might mean going into detail about how it protects sensitive data such as credit card numbers, and about the internal strategies it uses to keep customer information as safe as possible from cybercriminals.

One universal aspect of data privacy awareness is that it makes good business sense. The public is much more aware of data privacy issues than in past years, largely due to the high-profile breaches that capture headlines.

Lost customers, gigantic fines and damaged reputations after data breaches and misuse

When companies don't invest in data privacy measures, they can be victimized by severe data breaches, and if that happens, the ramifications are often substantial. A 2019 study from PCI Pal surveyed customers in the United States and the United Kingdom to determine how their perceptions and spending habits changed following data breaches. It found that 41% of UK customers and 21% of US customers stop spending money at a business forever if it suffers a data breach. The more common reaction is for consumers to stop spending money at breached businesses for several months afterward: in total, 62% of Americans and 44% of Brits said they'd take that approach.

That's not the only potential hit to a company's profitability. As the Facebook example discussed below indicates, there can also be massive fines. Two other recent examples involve the British Airways and Marriott Hotels breaches. A data regulatory body in the United Kingdom imposed the largest-ever data breach fine on British Airways after a 2018 hack, with the penalty totaling £183 million, more than $228 million. The same authority then gave Marriott Hotels the equivalent of a $125 million fine for its incident, alleging inadequate cybersecurity and data privacy due diligence.

These enormous fines don't only happen in the United Kingdom. The U.S. Federal Trade Commission (FTC) reached a settlement with Equifax that required the company to pay $700 million after its now-infamous data breach. The FTC also investigated Facebook's Cambridge Analytica scandal and handed the company a $5 billion fine for failing to adequately protect customer data, the largest the FTC has ever imposed. It's easy to see why losing customers after such incidents makes these substantial fines even more painful for the companies that have to pay them.

Problems also occur if companies misuse data. Take the example of a class-action lawsuit filed against AT&T. The telecom giant and a couple of data aggregation enterprises allegedly permitted third-party companies to access individuals' real-time locations via mobile phone data, without first checking whether the customers had allowed such access. Such news can cause irreparable reputational damage and make people hesitant to do business with a company.

Expecting customers to read privacy policies is not sufficient

Companies rely on both back-end and customer-facing strategies to meet their data security goals and earn customer trust. Some businesses go beyond the norm by publishing sections on their websites that detail how their infrastructure supports data privacy. They discuss the implementation of things like multi-layered data access authorization frameworks, physical access controls for server rooms, and data encryption at rest and in transit. But one of the more prominent customer-facing declarations of a company's commitment to keeping data secure is the privacy policy, now a fixture of modern websites.

Companies cannot avoid publishing their privacy policies, of course. However, most people don't take the time to read those documents. An Axios/SurveyMonkey poll spotlighted a disconnect between respondents' beliefs and actions: although 87% of them felt it was somewhat or very important to understand a company's privacy policy before signing up for something, 56% always or usually agree to it without reading it. Research on the subject by Varonis found that some privacy policies take nearly half an hour to read, and that their reading level became more advanced after the GDPR came into effect. Together, these studies illustrate that companies need to go beyond expecting customers to read their privacy policies; they should work hard to make them shorter and easier to understand.

Most people want companies to take a stand for Data Privacy

A study of 1,000 people conducted in the United Kingdom supported an earlier Gemalto finding that people consider the companies holding their data responsible for maintaining its security. It concluded that customers felt it was "highly important" for businesses to take a stand for information security and privacy, with 53% expecting firms to do so. Moreover, a CIGI-Ipsos worldwide survey found that 53% of those polled were more concerned about online privacy than a year ago, and 49% said their rising distrust of the internet made them provide less information online.

Companies must show they care about data privacy and work that aspect into their business strategies. Otherwise, they may find customers leaving them in favor of more privacy-centric organizations. To get an idea of what can happen when companies commit data privacy blunders, people only need to look at how Facebook users responded in the Cambridge Analytica aftermath. Statistics published by the Pew Research Center showed that 54% of adults changed their privacy settings in the past year, while approximately a quarter stopped using the site. After the news broke about Facebook and Cambridge Analytica, many media outlets reminded people that they could download all the data Facebook had about them. The Pew Research Center found that although only 9% of its respondents took that step, 47% of the people in that group removed the app from their phones.

Data Privacy is a Top-of-Mind concern

The studies and examples mentioned here strongly suggest consumers are no longer willing to accept the wrongful treatment of their data. They increasingly hold companies accountable and show little forgiveness when their privacy expectations are not met. The most forward-thinking companies see this change and respond accordingly; those that choose inaction risk losing out. Individuals understand that companies value their data, but they aren't willing to part with it freely unless companies convey trustworthiness first.

Author Bio

Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com.

Facebook fails to block ECJ data security case from proceeding
ICO to fine Marriott over $124 million for compromising 383 million users' data
Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content

Black Hat USA 2019 conference Highlights: IBM’s ‘warshipping’, OS threat intelligence bots, Apple’s $1M bug bounty programs and much more!

Savia Lobo
09 Aug 2019
9 min read
The popular Black Hat USA 2019 conference was held from August 3 to August 8 in Las Vegas. The conference included technical training sessions conducted by international industry and subject matter experts to provide hands-on offensive and defensive skill-building opportunities, as well as briefings from security experts who shared their latest findings, open-source tools, zero-day exploits, and more. Tech giants including Apple, IBM, and Microsoft made some interesting announcements: Apple and Microsoft expanded their bug bounty programs, and IBM demonstrated a new 'warshipping' attack, among others. Black Hat USA 2019 also launched many interesting open-source tools and products, such as Scapy, a Python-based interactive packet manipulation program, CyBot, an open-source threat intelligence chatbot, and many other products.

Apple, IBM, and Microsoft announcements at Black Hat USA 2019

Apple expands its bug bounty program; announces new iOS 'security research device program'

Ivan Krstić, Apple's head of security engineering, announced that Apple is expanding its bug bounty program by making it available to all security researchers. Previously, the program was open only to those on the company's invite-only list, and the top reward was $200,000. Following this announcement, rewards of up to $1 million will be awarded to those who find vulnerabilities in Apple's iPhones and Macs. Krstić also said that next year, Apple will provide special iPhones to security researchers to help them find security flaws in iOS. To know more about this news in detail, head over to our complete coverage.

IBM's X-Force Red team announces new 'warshipping' hack to infiltrate corporate networks

IBM's offensive security team, X-Force Red, announced a new attack technique nicknamed "warshipping". According to Forbes, rather than cruising a neighborhood scouting for Wi-Fi networks, "warshipping allows a hacker to remotely infiltrate corporate networks by simply hiding inside a package a remote-controlled scanning device designed to penetrate the wireless network–of a company or the CEO's home–and report back to the sender."

Charles Henderson, head of IBM X-Force Red, said, "Think of the volume of boxes moving through a corporate mailroom daily. Or consider the packages dropped off on the porch of a CEO's home, sitting within range of their home Wi-Fi. Using warshipping, X-Force Red was able to infiltrate corporate networks undetected." To demonstrate the approach, the X-Force team built a low-power gizmo consisting of a $100 single-board computer with built-in 3G and Wi-Fi connectivity and GPS. It is smaller than the palm of your hand and can be hidden in a package sent out for delivery to a target's business or home. To know more about this announcement, head over to Forbes.

Microsoft adds $300,000 to its Azure bounty program

Microsoft has raised its top bug bounty reward to $300,000 for anyone who can successfully hack its public-cloud infrastructure service.
Kymberlee Price, a Microsoft security manager, said, "To make it easier for security researchers to confidently and aggressively test Azure, we are inviting a select group of talented individuals to come and do their worst to emulate criminal hackers." Further, to avoid causing any disruption to its corporate customers, Microsoft has set up a dedicated customer-safe cloud environment, the Azure Security Lab: a set of dedicated cloud hosts, similar to a sandbox environment and totally isolated from Azure customers, for security researchers to test attacks against Microsoft's cloud infrastructure. To know more about this announcement in detail, head over to Microsoft's official post.

Some open-source tools and products launched at Black Hat USA 2019

Scapy: Python-Based Interactive Packet Manipulation Program + Library

Scapy is a powerful Python-based interactive packet manipulation program and library. It can forge or decode packets of a wide number of protocols, send them on the wire, capture them, store or read them using pcap files, match requests and replies, and much more. Scapy easily handles most classical tasks like scanning, tracerouting, probing, unit tests, attacks, and network discovery. It also performs well at many specific tasks that most other tools can't handle, like sending invalid frames, injecting your own 802.11 frames, and combining techniques (VLAN hopping + ARP cache poisoning, VoIP decoding on a WEP-protected channel, ...). A short usage sketch follows below.
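To give a feel for the layering style Scapy uses, here is a minimal sketch of crafting and sending probes interactively. It is illustrative only: the target address is a placeholder (a TEST-NET documentation address), and sending raw packets typically requires root privileges.

```python
# Minimal Scapy sketch: stack layers with "/", send, and inspect the reply.
# Illustrative only; 192.0.2.10 is a placeholder documentation address,
# and sending raw packets usually requires root privileges.
from scapy.all import IP, ICMP, TCP, sr1

# ICMP echo request (ping).
reply = sr1(IP(dst="192.0.2.10") / ICMP(), timeout=2, verbose=False)
if reply is not None:
    reply.show()  # dump the decoded layers of the response

# Single TCP SYN probe against port 443 (a half-open scan of one port).
ans = sr1(IP(dst="192.0.2.10") / TCP(dport=443, flags="S"),
          timeout=2, verbose=False)
if ans is not None and ans.haslayer(TCP) and (int(ans[TCP].flags) & 0x12) == 0x12:
    print("port 443 answered with SYN/ACK")
```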
CyBot: Open-Source Threat Intelligence Chat Bot

The goal in creating CyBot, speaker Tony Lee explains, was "to create a repeatable process using a completely free open source framework, an inexpensive Raspberry Pi (or even virtual machine), and host a community-driven plugin framework to open up the world of threat intel chatbots to everyone from the average home user to the largest security operations center". CyBot first debuted at Black Hat Arsenal Vegas 2017 and was also taken to Black Hat Europe and Asia to gather feedback and ideas from an enthusiastic international crowd. That feedback helped the researchers enhance CyBot and upgrade its platform. Now, you can build your own CyBot within an hour for anywhere from $0 to $35 in expenses.

Azucar: Multi-Threaded Plugin-Based Tool to Help Assess the Security of Azure Cloud Environment Subscription

Azucar is a multi-threaded, plugin-based tool to help assess the security of an Azure Cloud environment subscription. By leveraging the Azure API, Azucar automatically gathers a variety of configuration data and analyses all data relating to a particular subscription in order to determine security risks.

EXPLIoT: IoT Security Testing and Exploitation Framework

EXPLIoT, developed in Python 3, is a framework for security testing and exploiting IoT products and IoT infrastructure. It includes a set of plugins (test cases) which are used to perform the assessment and can be extended easily with new ones. It can be used as a standalone tool for IoT security testing and, more interestingly, it provides building blocks for writing new plugins/exploits and other IoT security assessment test cases with ease. EXPLIoT supports most IoT communication protocols and hardware interfacing functionality, with test cases that can be used from within the framework to quickly map and exploit an IoT product or IoT infrastructure.

PyRDP: Python 3 Remote Desktop Protocol Man-in-the-Middle (MITM) and Library

PyRDP is an RDP man-in-the-middle tool with applications in pentesting and malware research. In pentesting, PyRDP has a number of features that allow attackers to compromise RDP sessions when combined with TCP man-in-the-middle solutions. On the malware research side, PyRDP can be used as part of a fully interactive honeypot: it can be placed in front of a Windows RDP server to intercept malicious sessions, and it can replace the credentials provided in the connection sequence with working credentials to accelerate compromise and the collection of malicious behavior.

MoP: Master of Puppets - Open Source Super Scalable Advanced Malware Tracking Framework for Reverse Engineers

MoP ("Master of Puppets") is an open-source framework for reverse engineers who want to create and operate trackers for newly found malware. MoP ships with a variety of workstation simulation capabilities, such as a fake filesystem manager and fake process manager, multi-worker orchestration, TOR integration, and more, all aiming to deceive adversaries into interacting with a simulated environment and possibly dropping new unique samples. "Since everything is done in pure python, no virtual machines or Docker containers are needed and no actual malicious code is executed, all of which enables us to scale up in a click of a button, connecting to potentially thousands of different malicious servers at once from a single instance running on a single laptop."

Commando VM 2.0: Security Distribution for Penetration Testers and Red Teamers

Commando VM is an open-source Windows-based security distribution designed for penetration testers and red teamers. It is an add-on to FireEye's very successful reverse engineering distribution, FLARE VM. Similar to Kali Linux, Commando VM is designed with an arsenal of open-source offensive tools that help operators achieve assessment objectives. Built on Windows, Commando VM comes with native support for accessing Active Directory environments. Commando VM also includes:

- Web application assessment tools
- Scripting languages (such as Python and Go)
- Information gathering tools (such as Nmap, Wireshark, and PowerView)
- Exploitation tools (such as PowerSploit, GhostPack and Mimikatz)
- Persistence tools, lateral movement tools, evasion tools, post-exploitation tools (such as FireEye's SessionGopher), remote access tools, command-line tools, and all the might of FLARE VM's reversing tools

Commando VM 1.0 debuted at Black Hat Asia in Singapore this year, and less than two weeks after release its "GitHub repository had over 2000 followers and over 400 forks".

BLACKPHENIX: Malware Analysis + Automation Framework

The BLACKPHENIX framework performs intelligent automation and analysis by combining all the known malware analysis approaches, automating the time-consuming stages, and counter-attacking malware behavioral patterns. Its objective is to generate precise IOCs by revealing the real purpose of the malware and exposing the hidden data and related functionalities used to exfiltrate or compromise user information. The framework focuses on consolidating, correlating, and cross-referencing the data collected between analysis stages through the execution of Python scripts and helper modules, providing full synchronization between the debugger, disassembler, and supporting components.
AutoMacTC: Finding Worms in Apple Orchards - Using AutoMacTC for macOS Incident Response

AutoMacTC is an open-source Python framework that can be quickly deployed to gather forensic data on macOS devices, from the artifacts that matter most to you and your investigation. The speakers, Kshitij Kumar and Jai Musunuri, say, "Performing forensic imaging and deep-dive analysis can be incredibly time-consuming and induce data fatigue in analysts, who may only need a select number of artifacts to identify leads and start finding answers. The resources-to-payoff ratio is impractical." AutoMacTC captures the needed data in a single location, equipping responders with all of the above.

To know about other open-source products in detail, head over to the Arsenal section. Black Hat USA 2019 also hosted a number of training sessions for cybersecurity developers, pentesters, and other security enthusiasts. To know more about the entire conference in detail, head over to the Black Hat USA 2019 official website.

Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
Apple plans to suspend Siri response grading process due to privacy issues
Apple Card, iPhone's new payment system, is now available for select users

CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed

Vincy Davis
09 Aug 2019
7 min read
Last year, the Cloud Native Computing Foundation (CNCF) began conducting third-party security audits for its projects, with the aim of improving the overall security of the CNCF ecosystem. CoreDNS, Envoy, and Prometheus are some of the CNCF projects that underwent these audits, which identified several security issues and vulnerabilities. With the help of the audit results, CoreDNS, Envoy, and Prometheus addressed their security issues and later provided users with documentation on them.

CNCF CTO Chris Aniszczyk says, "The main takeaway from these initial audits is that a public security audit is a great way to test the quality of an open source project along with its vulnerability management process and more importantly, how resilient the open source project's security practices are." He also announced that, later this year, CNCF will initiate a bounty program for researchers who identify bugs and other cybersecurity shortcomings in its projects.

After this initial success, CNCF formed a Security Audit Working Group to provide security audits for its graduated projects, using funds provided by the CNCF community. CNCF's graduated projects include Kubernetes, Envoy, and Fluentd, among others. Given the complexity and wide scope of the Kubernetes project, the Working Group appointed two firms, Trail of Bits and Atredis Partners, to perform the audit. Trail of Bits applies high-end security research to identify vulnerabilities, reduce risk, and strengthen code; Atredis Partners likewise performs complex, research-driven security testing and consulting.

Kubernetes security audit findings

Three days ago, the Trail of Bits team released an assessment report called the Kubernetes Security Whitepaper, which covers the key aspects of the Kubernetes attack surface and security architecture. It aims to empower administrators, operators, and developers to make better design and implementation decisions, and it presents a list of potential threats to a Kubernetes cluster.

https://twitter.com/Atlas_Hugged/status/1158767960640479232

Kubernetes cluster vulnerabilities

A Kubernetes cluster consists of several base components such as the kubelet, kube-apiserver, kube-scheduler, kube-controller-manager, and a kube-apiserver storage backend. Components like controllers and schedulers assist in networking, scheduling, and environment management. Once a base Kubernetes cluster is configured, it is managed through operator-defined objects. These operator-defined objects, referred to as abstractions, represent the state of the cluster. To allow easy configuration and portability, the abstractions are component-agnostic, which further increases the operational complexity of a Kubernetes cluster.
Since Kubernetes is a large system with many functionalities, the security audit focused on eight selected components within the larger Kubernetes ecosystem:

- Kube-apiserver
- Etcd
- Kube-scheduler
- Kube-controller-manager
- Cloud-controller-manager
- Kubelet
- Kube-proxy
- Container Runtime

The Trail of Bits team first identified three types of attackers within a Kubernetes cluster:

- External attackers (who do not have access to the cluster)
- Internal attackers (who have transited a trust boundary)
- Malicious internal users (who abuse their privileges within the cluster)

The audit produced 37 findings in total: 5 high severity, 17 medium severity, 8 low severity, and 7 informational, spanning the access control, authentication, timing, and data validation of a Kubernetes cluster. The findings include:

- Insecure TLS is in use by default
- Credentials are exposed in environment variables and command-line arguments
- Names of secrets are leaked in logs
- No certificate revocation
- seccomp is not enabled by default

Recommendations for Kubernetes cluster administrators and developers

The Trail of Bits team proposed a list of best practices and guideline recommendations for cluster administrators and developers.

Recommendations for cluster administrators

Attribute Based Access Controls vs Role Based Access Controls: Role-Based Access Controls (RBAC) can be configured dynamically while a cluster is operational. In contrast, Attribute Based Access Controls (ABAC) are static in nature, which increases the difficulty of ensuring proper deployment and enforcement of controls.

RBAC best practices: Administrators are advised to test their RBAC policies to ensure that the policies defined on the cluster are backed by an appropriate component configuration and that the policies properly restrict behavior. A minimal sketch of such a check follows below.
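One lightweight way to spot-check an RBAC policy is to ask the API server whether the current identity is allowed to perform a given action, using a SelfSubjectAccessReview. This is a minimal sketch with the official Kubernetes Python client, assuming a reachable cluster and a local kubeconfig; the verb, resource, and namespace are placeholders to adapt to the permissions your policy is supposed to restrict.

```python
# Minimal RBAC spot-check: ask the API server whether the current identity
# may list secrets in the "default" namespace. Verb/resource/namespace are
# placeholders; substitute the permissions your policy should deny.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            verb="list", resource="secrets", namespace="default"
        )
    )
)
resp = client.AuthorizationV1Api().create_self_subject_access_review(review)
print("allowed:", resp.status.allowed)  # expect False for a locked-down role
```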
Node-host configurations and permissions: Appropriate authentication and access controls should be in place for the cluster nodes, since an attacker with network access can use Kubernetes components to compromise other nodes.

Default settings and backwards compatibility: Kubernetes contains many default settings that negatively impact the security of a cluster. Cluster operators and administrators must therefore ensure that component and workload settings can be rapidly changed and redeployed in case of a compromise or an update.

Networking: Due to the complexity of Kubernetes networking, there are many recommendations for maintaining a secure network. Among them: proper segmentation and isolation rules for the underlying cluster hosts should be defined, and a host executing control-plane components should be isolated to the greatest extent possible.

Environment considerations: The security of a cluster's operating environment should be addressed. If a cluster is hosted on a cloud provider, administrators should ensure that best-practice hardening rules are implemented.

Logging and alerting: Centralized logging of both workload and cluster host logs is recommended to enable debugging and event reconstruction.

Recommendations for developers

Avoid hardcoding paths to dependencies: Developers are advised to be conservative and cautious when handling external paths. Users should be warned if a path is not found, and should have the option to specify it through a configuration variable.

File permissions checking: Kubernetes should give users the ability to perform file permissions checking, and enable this feature by default. This will help prevent common file permissions misconfigurations and promote more secure practices.

Monitoring processes on Linux: A Linux process is uniquely identified in user space by a process identifier, or PID. A PID points to a given process only as long as the process is alive; once it dies, the PID can be reused by another spawned process (see the sketch after this list).

Moving processes to a cgroup: When moving a given process to a less restricted cgroup, it is necessary to validate that the process is still the correct process after performing the move.

Future cgroup considerations for Kubernetes: Neither Kubernetes nor the components it uses (runc, Docker) currently support cgroups v2. This is not an issue today, but the topic is worth tracking as it may change in the future.

Future process handling considerations for Kubernetes: Tracking, and participating in, the development of process and thread handling mechanisms on Linux is highly recommended.
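The PID-reuse caveat above is easy to demonstrate. A common but racy liveness check probes the PID with signal 0; between the check and any follow-up action, the kernel may recycle the number for an unrelated process. On recent Linux kernels, a pidfd refers to the process itself rather than the number, which closes the race. A minimal sketch, assuming Python 3.9+ and Linux 5.3+ for the pidfd part:

```python
import os

def pid_alive(pid: int) -> bool:
    """Classic probe: signal 0 checks existence without delivering a signal.
    Racy: by the time the caller acts, the PID may belong to a new process."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the process exists but is owned by another user

# Safer on Linux 5.3+ / Python 3.9+: a pidfd tracks the process, not the
# number, so a recycled PID cannot be mistaken for the original process.
fd = os.pidfd_open(os.getpid())
os.close(fd)
```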
Kubernetes security audit sets precedent for other open source projects

By conducting security audits and open sourcing the findings, Kubernetes, a widely used container-orchestration system, sets a great precedent for other projects and shows its commitment to maintaining security in its ecosystem. Though the number of security flaws found in the audit may unsettle Kubernetes developers, it also assures them that the project is working to stay ahead of potential attackers. The Security Whitepaper and the threat model provided with the security audit should serve Kubernetes community members well as future references. Developers have also appreciated CNCF's efforts in securing the Kubernetes system.

https://twitter.com/thekonginc/status/1159578833768501248
https://twitter.com/krisnova/status/1159656574584930304
https://twitter.com/zacharyschafer/status/1159658866931589125

To know more details about the security audit of Kubernetes, check out the Kubernetes Security Whitepaper.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

Julia co-creator, Jeff Bezanson, on what’s wrong with Julialang and how to tackle issues like modularity and extension

Vincy Davis
08 Aug 2019
5 min read
The Julia language, touted as the fastest-growing programming language, held its 6th annual JuliaCon 2019 from July 22 to 26 in Baltimore, USA. On the fourth day of the conference, Jeff Bezanson, co-creator of the Julia language and co-founder of Julia Computing, gave a talk explaining "What's bad about Julia". Bezanson opened with a disclaimer that he would only mention the bad things in Julia he is currently aware of, and then listed the language's most widely known issues.

What's wrong with Julia

- Compiler latency: Compiler latency has been one of the highest-priority issues in Julia; it is much slower than languages like Python (~27x slower) or C (~187x slower).
- Static compilation support: Julia can, of course, be compiled, but unlike a language such as C, which is compiled before execution, Julia is compiled at runtime, and its support for static compilation is poor.
- Immutable arrays: Many developers have contributed immutable array packages; however, many packages assume mutability by default, resulting in more work for users. Julia users have therefore been requesting better support for immutable arrays.
- Mutation issues: This is a common stumbling block, as many developers find it difficult to identify what is safe to mutate.
- Array optimizations: To get good performance, Julia users have to manually use in-place operations to obtain high-performance array code.
- Better traits: Users have been requesting more traits in Julia, to avoid big unions that list every instance of a type instead of adding a declaration. This has been a particular issue in array code and linear algebra.
- Incomplete notations: Some notations in Julia are incomplete, for example for N-d arrays.

Many members of the audience agreed with Bezanson's list and appreciated his frankness in acknowledging the problems in Julia. In this talk, Bezanson chose to explore two less-discussed issues, modularity and extension, which he admits are weird and worrisome even to him.

How to tackle modularity and extension issues in Julia

A typical Julia module extends functions from another module. This helps users compose many things and get lots of new functionality for free. But what if a user wants a separately compiled, isolated module, one that is completely sealed, predictable, and quicker to compile?

Bezanson illustrated how the two issues of modularity and extension can be avoided in Julia code. He starts with two unrelated packages that communicate with each other by extending functions in a shared base package. This scenario, he states, is common in a core module that requires a few primitives like an Any type, an Int type, and others. The two packages in the core module are Core.Compiler and Base, each with its own definitions. The two packages share some common code, which requires the user to write the same code twice, once in each package; Bezanson thinks this is "fine".

The thornier problem, Bezanson says, is typeof in the core module. Because both packages need to define constructors for their own types, it is not possible to share those constructors. This means that, except for constructors, everything else is isolated between the two packages.
He adds, "In practice, it doesn't really matter because the types are different, so they can be distinguished just fine, but I find it annoying that we can't sort of isolate those method tables of the constructors. I find it kind of unsatisfying that there's just this one exception."

Bezanson then explains how types can be described using different representations and extensions, and offers two rules for reasoning about method specificity in Julia. The first rule: a method signature is more specific than another if it is a strict subtype (<:, not ==) of that signature. The second rule, according to Bezanson, is that ambiguity cannot be avoided: if methods overlap in their arguments and have no specificity relationship, then "users have to give an ambiguity error". This way, users can stay on the safe side by assuming that signatures do overlap. And if two signatures are essentially the same, "then it does not matter which signature is called", adds Bezanson.
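Julia's multiple dispatch has no direct Python counterpart, but the "strict subtype wins" rule can be loosely illustrated with Python's functools.singledispatch, where an implementation registered for a subtype beats one registered for its supertype. This is a rough analogy only, with illustrative function names:

```python
# Rough single-dispatch analogy to "more specific = strict subtype":
# the implementation registered for the subtype (int) beats the one
# registered for its supertype (numbers.Integral). Names are illustrative.
from functools import singledispatch
from numbers import Integral

@singledispatch
def describe(x):
    return "fallback for any value"

@describe.register
def _(x: Integral):
    return "some kind of integer"

@describe.register
def _(x: int):
    return "exactly a built-in int"  # int <: Integral, so this wins for int

print(describe(3))    # "exactly a built-in int"
print(describe(2.5))  # "fallback for any value" (float is not an Integral)
```

The analogy is imperfect: Bezanson's second rule concerns ambiguity between overlapping signatures under multiple dispatch, which Python's single-argument dispatch does not model.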
Finally, after walking through these workarounds, Bezanson concludes that "Julia is not that bad", stating that the Julia language could be a lot better and that the team is trying its best to tackle all the issues. Watch the video below to see all the illustrations Bezanson demonstrated during his talk.

https://www.youtube.com/watch?v=TPuJsgyu87U

Julia users around the world have appreciated Bezanson's honest and frank talk at JuliaCon 2019.

https://twitter.com/MoseGiordano/status/1154371462205231109
https://twitter.com/johnmyleswhite/status/1154726738292891648

Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
Creating a basic Julia project for loading and saving data [Tutorial]

What is a Magecart attack, and how can you protect your business?

Guest Contributor
07 Aug 2019
5 min read
Recently, British Airways was slapped with a $230M fine after attackers stole data from hundreds of thousands of its customers in a massive breach. The fine, the result of a GDPR prosecution, was issued after a 2018 Magecart attack in which attackers inserted around 22 lines of code into the airline's website, allowing them to capture customer credit card numbers and other sensitive information.

Magecart attacks have largely gone unnoticed within the security world in recent years, despite the fact that the majority occur at eCommerce companies and other businesses that collect credit card information from customers. Magecart has been responsible for significant damage, theft, and fraud across a variety of industries: according to a 2018 report by RiskIQ and Flashpoint, at least 6,400 websites had been affected by Magecart as of November 2018. To safeguard against Magecart and protect your organization from web-based threats, there are a few things you should do.

Understand how Magecart attacks happen

There are two approaches hackers take in Magecart attacks: the first focuses on attacking the main website or application, while the second exploits third-party tags. In both cases, the intent is to insert malicious JavaScript that skims data from HTML forms and sends it to servers controlled by the attackers.

Users typically enter personal data, whether for authentication, searching for information, or checking out with a credit card, into a website through an HTML form. Magecart attacks use JavaScript to monitor specific form fields for this kind of sensitive data, such as a password, social security number, or credit card number. When a value is entered, the script makes a copy and sends it to a different server on the internet. In the British Airways attack, for example, hackers inserted malicious code into the airline's baggage claim subdomain, which appears to have been less secure than the main website. This code was referenced on the main website, and when run within customers' browsers, it could skim credit card and other personal information.

Get ahead of the confusion that surrounds the attacks

Magecart attacks are very difficult for web teams to identify because they do not take place on the provider's backend infrastructure, but within the visitor's browser. Data is transferred directly from the browser to malicious servers, without any interaction with the backend website server, the origin, needing to take place. As a result, regularly auditing the backend infrastructure and the code supporting the website won't stop attacks, because the issue is happening in the user's browser, which traditional auditing won't detect. This means Magecart attacks are typically discovered only when the company is alerted to credit card fraud, or when a client-side code review covering all third-party services takes place. Because of this, many sites online today still carry malicious Magecart JavaScript within their pages, leaking sensitive information.

Restrict access to sensitive data

There are a number of things your team can do to prevent Magecart attacks from threatening your website visitors. First, limit third-party code on sensitive pages. People tend to add third-party tags all over their websites, but consider whether you really need that kind of functionality on high-security pages (like your checkout or login pages). Removing non-essential third-party tags like chat widgets and site surveys from sensitive pages limits your exposure to potentially malicious code.

Next, you should consider implementing content security policies (CSP). Web teams can build policies that dictate which domains can run code and send data on sensitive pages. While this approach requires ongoing maintenance, it's one way to prevent malicious hackers from gaining access to visitors' sensitive information.
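As a rough illustration of such a policy, here is a minimal sketch that sets a CSP header on every response from a Flask application. The vetted vendor domain is a placeholder, and a real policy would be tuned per page and trialed with a report-only header first.

```python
# Minimal CSP sketch for a Flask app. The vendor domain is a placeholder.
# script-src limits which origins may run JavaScript; connect-src and
# form-action limit where the page can send data, which are the channels
# a Magecart skimmer needs for exfiltration.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://js.vetted-vendor.example; "
        "connect-src 'self'; "
        "form-action 'self'"
    )
    return response
```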
Another approach is to adopt a zero-trust strategy. Web teams can look to a third-party security service that allows creating a policy that, by default, blocks access to sensitive data entered in web forms or stored in cookies, and then restrict access to that data to everyone except a select set of vetted scripts. With such policies in place, if malicious skimming code does make it onto your site, it won't be able to access any sensitive information, and alerts will let you know when a vendor's code has been exploited.

Magecart doesn't need to destroy your brand

Web skimming attacks can be difficult to detect because they don't attack core application infrastructure; they focus on the browser, where protections are not in place. As such, many brands are confused about how to protect their customers. However, implementing a zero-trust approach, thinking critically about how many third-party tags your website really needs, and limiting who is able to run code will help you keep your customer data safe.

Author bio

Peter is the VP of Technology at Instart. Previously, Peter was with Citrix, where he was senior director of product management and marketing for the XenClient product. Prior to that, he held a variety of pre-sales, web development, and IT admin roles, including five years at Marimba working with enterprise change management systems. Peter has a BA in Political Science with a minor in Computer Science from UCSD.

British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach
A universal bypass tricks Cylance AI antivirus into accepting all top 10 Malware
An IoT worm Silex, developed by a 14 year old resulted in malware attack and took down 2000 devices