
Tech News - Security

470 Articles

Facebook finds ‘no evidence that hackers accessed third party Apps via user logins’, from last week’s security breach

Natasha Mathur
04 Oct 2018
3 min read
Facebook revealed last Friday that a major security breach compromised 50 million user accounts. The attack affected not only users’ Facebook accounts but also other accounts linked to Facebook. The hackers exploited Facebook’s “View As” feature, which lets people see what their own profile looks like to someone else, to steal Facebook access tokens and break into other users’ accounts. These tokens give an attacker full control over a victim’s account, including the ability to log into third-party applications that use Facebook Login.

“We wanted to provide an update on the security attack that we announced last week. We fixed the vulnerability and we reset the access tokens for a total of 90 million accounts — 50 million that had access tokens stolen and 40 million that were subject to a “View As” look-up in the last year” wrote Guy Rosen, VP of product management. Resetting the tokens required users to log back in to their Facebook accounts, as well as into any accounts or apps that use Facebook.

As for the attack’s effect on apps that use Facebook, Facebook has yet to find any impact. “We have now analyzed our logs for all third-party apps installed or logged in during the attack we discovered last week. That investigation has so far found no evidence that the attackers accessed any apps using Facebook Login”, states the Facebook post.

All developers using the official Facebook SDKs, along with those who check the validity of their users’ access tokens, were automatically protected when the access tokens were reset. To be extra careful, however, Facebook is developing a tool that will allow developers to manually identify the users of their apps affected by the security breach so that they can be logged out. This will also benefit developers who don’t use Facebook’s SDKs or who don’t regularly check whether Facebook access tokens are valid.

Additionally, Facebook recommends that developers always follow its Facebook Login security best practices. It recommends using Facebook’s official SDKs for Android, iOS, and JavaScript, as these automatically check the validity of access tokens and force a fresh login every time Facebook resets tokens, thereby protecting users’ accounts. Facebook also wants developers to use the Graph API to keep information updated regularly and to make sure users are logged out of apps whenever their Facebook session becomes invalid.

“Security is incredibly important to Facebook. We’re sorry that this attack happened — and we’ll continue to update people as we find out more” reads the post. For more information, check out the official announcement.
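As a concrete illustration of the token-validity check described above, here is a minimal Python sketch against the Graph API’s debug_token endpoint. The token values are placeholders and real apps would add error handling; this is a sketch of the general pattern, not Facebook’s own tooling.

```python
import requests  # third-party package: pip install requests

GRAPH_URL = "https://graph.facebook.com/debug_token"

def is_token_valid(user_token: str, app_token: str) -> bool:
    """Ask the Graph API whether a user's access token is still valid.

    A token invalidated by Facebook's mass reset would come back with
    is_valid == False, at which point the app should force a fresh login.
    """
    resp = requests.get(GRAPH_URL, params={
        "input_token": user_token,   # token received via Facebook Login
        "access_token": app_token,   # app token, typically "<app_id>|<app_secret>"
    })
    data = resp.json().get("data", {})
    return bool(data.get("is_valid"))

if __name__ == "__main__":
    # Placeholder tokens for illustration only.
    print(is_token_valid("USER_ACCESS_TOKEN", "APP_ID|APP_SECRET"))
```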


A year later, Google Project Zero still finds Safari vulnerable to DOM fuzzing using publicly available tools to write exploits

Melisha Dsouza
05 Oct 2018
4 min read
It has been a year since the Project Zero team published the results of their research into the resilience of modern browsers against DOM fuzzing, along with Domato, the DOM fuzzing tool used to find those bugs. The results were striking: Apple Safari, or more specifically WebKit (its DOM engine), did not fare well in the test. The team decided to revisit the project using exactly the same methodology and exactly the same tools to see whether the browsers have since implemented better security mechanisms.

The Test Setup

As in the previous research, fuzzing was done against WebKitGTK+ and the resulting crashes were then tested against Apple Safari running on a Mac. This time, WebKitGTK+ version 2.20.2 was used, with a couple of custom changes made to improve the fuzzing process:

- Building WebKitGTK+ with ASan (Address Sanitizer) is now possible.
- The window.alert() implementation was changed to immediately call the garbage collector instead of displaying a message window.
- Normally, when a DOM bug causes a crash, only WebKit’s web process crashes while the main process keeps running, due to WebKit’s multi-process architecture; code was added to crash the main process whenever the web process crashes.

The team also created a custom target binary.

Results Obtained

After running the fuzzer for 100,000,000 iterations, the team discovered 9 unique bugs, which were reported to Apple. All of these bugs had been fixed by the time the blog post was released. The bugs are summarized in the table below.

Project Zero bug ID   CVE             Type      Affects Safari 11.1.2   Older than 6 months   Older than 1 year
1593                  CVE-2018-4197   UAF       YES                     YES                   NO
1594                  CVE-2018-4318   UAF       NO                      NO                    NO
1595                  CVE-2018-4317   UAF       NO                      YES                   NO
1596                  CVE-2018-4314   UAF       YES                     YES                   NO
1602                  CVE-2018-4306   UAF       YES                     YES                   NO
1603                  CVE-2018-4312   UAF       NO                      NO                    NO
1604                  CVE-2018-4315   UAF       YES                     YES                   NO
1609                  CVE-2018-4323   UAF       YES                     YES                   NO
1610                  CVE-2018-4328   OOB read  YES                     YES                   YES

(UAF = use-after-free; OOB = out-of-bounds.)

Of the 9 bugs found, 6 affected the release version of Apple Safari, directly affecting Safari users. While this is significantly fewer than the 17 bugs found a year ago, it is still a notable number, especially since the fuzzer has been public for a long time now. The team found that most of the bugs had been sitting in the WebKit codebase for longer than 6 months, though only 1 was older than a year. They also note that throughout the past year their fuzzing process came up with 14 bugs in total, but they cannot say for certain whether these have all been resolved or are still live.

The Exploit performed on the bugs

To prove that bugs like these can lead to a browser compromise, an exploit was written for one of them. Of the 6 issues affecting the release version of Safari, the researchers selected a use-after-free issue to exploit; the details are well explained in Project Zero’s blog post. The exploit was successfully tested on Mac OS 10.13.6 (build version 17G65), and the full details can be seen at bugs.chromium.org. An interesting aspect of this exploit is that, on Safari for Mac OS, it could be written in a very “old-school” way due to the lack of control flow mitigations on the platform. That said, on the latest mobile hardware and in iOS 12, which was published after the exploit was already written, Apple introduced control flow mitigations using Pointer Authentication Codes (PAC).
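To make the generate-and-run loop from the test setup concrete, here is a minimal Python sketch. The generator flags match Domato’s public repository (--output_dir, --no_of_files), but the target binary path, batch size, and crash heuristic (non-zero exit under ASan) are illustrative assumptions, not the Project Zero harness.

```python
import subprocess
import sys
from pathlib import Path

# Placeholder paths: point these at your own Domato checkout and an
# ASan-instrumented WebKit build. Neither ships with this article.
DOMATO_GENERATOR = Path("domato/generator.py")
TARGET_BINARY = Path("webkitgtk-asan/bin/MiniBrowser")
WORK_DIR = Path("fuzz-cases")
BATCH_SIZE = 100
TIMEOUT_SECS = 30

def generate_batch():
    """Ask Domato to emit a batch of randomized HTML/DOM test cases."""
    WORK_DIR.mkdir(exist_ok=True)
    subprocess.run(
        [sys.executable, str(DOMATO_GENERATOR),
         "--output_dir", str(WORK_DIR),
         "--no_of_files", str(BATCH_SIZE)],
        check=True,
    )
    return sorted(WORK_DIR.glob("*.html"))

def crashed(case):
    """Run one case; treat a non-zero exit (e.g. an ASan abort) as a crash."""
    result = subprocess.run(
        [str(TARGET_BINARY), str(case)],
        capture_output=True,
        timeout=TIMEOUT_SECS,
    )
    return result.returncode != 0

if __name__ == "__main__":
    for case in generate_batch():
        try:
            if crashed(case):
                print(f"possible crash: {case}")
        except subprocess.TimeoutExpired:
            print(f"timeout (possible hang): {case}")
```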
The issues were reported to Apple between June 15 and July 2, 2018. On September 17, 2018, Apple published security advisories for iOS 12, tvOS 12 and Safari 12 which fixed all of the issues. Although the bugs were fixed at that time, the corresponding advisories did not initially mention them; the issues were only added to the advisories a week later, on September 24, 2018, when the security advisories for macOS Mojave 10.14 were also published.

The researchers affirm that there have been clear improvements in WebKit’s DOM engine when tested with Domato. However, the public fuzzer was still able to find a large number of bugs. This is worrying: if a public tool was able to find that many bugs, private tools can be even more effective. To know more about this experiment, head over to Google Project Zero’s official blog.


How Twitter is defending against the Silhouette attack that discovers user identity

Savia Lobo
20 Sep 2018
5 min read
Twitter Inc. disclosed that it is learning to defend against a new cyber attack technique, Silhouette, that discovers the identity of logged-in Twitter users. The issue was first reported to Twitter in December 2017 through its vulnerability rewards program by a group of researchers from Waseda University and NTT, who submitted a draft of their paper for the IEEE European Symposium on Security and Privacy in April 2018. Following this, Twitter’s security team prioritized the issue, routed it to several relevant teams, and contacted several other at-risk sites and browser companies to urgently address the problem. Recognizing the significance of the problem, Twitter formed a cross-functional squad to address it.

The Silhouette attack

The attack exploits variability in the time web pages take to load. It is made possible by a function called “user blocking” that is widely adopted in social web services (SWSs), which lets a malicious user control the visibility of pages to legitimate users. As a preliminary step, the malicious third party creates personal accounts within the target SWS (referred to below as “signaling accounts”) and uses these accounts to systematically block some users on the same service, thereby constructing a pattern of non-blocked/blocked users. This pattern can be used as information for uniquely identifying user accounts. At identification time, that is, when a user visits a website on which a script for identifying account names has been installed, that user is forced to communicate with pages of each of those signaling accounts. This communication is protected by the Same-Origin Policy, so the third party cannot directly obtain the content of a response; but because the attack measures only load times, identity can still leak.

The action taken against Silhouette attack

The Waseda University and NTT researchers provided various ideas for mitigating the issue in their research paper. The ideal solution was to use the SameSite attribute for the Twitter login cookies. This would mean that requests to Twitter from other sites would not be considered logged-in requests, and if the requests aren’t logged-in requests, identity can’t be detected. However, this feature was an expired draft specification and had only been implemented by Chrome. Although Chrome is one of the biggest browsers by usage, Twitter needed to cover other browsers as well, so it looked into other options to mitigate the issue.

Twitter decided to reduce the response-size differences by loading a page shell and then loading all content with JavaScript using AJAX; page-to-page navigation for the website already works this way. However, the server processing differences were still significant for the page shell, because the shell still needed to provide header information, and those queries made a noticeable impact on response times. Twitter’s CSRF protection mechanism for POST requests checks whether the origin and referer headers of the request are sourced from Twitter. This proved effective in addressing the vulnerability, but it prevented the initial load of the website: users might load Twitter from a Google search result or by typing the URL into the browser. To address this case, Twitter created a blank page on their site which does nothing but reload itself. Upon reload, the referer is set to twitter.com, so the page loads correctly, and there is no way for non-Twitter sites to follow that reload.
The blank page is super-small, so while a round-trip load is incurred, it doesn’t impact load times too much, and Twitter was able to apply this solution across its various high-level web stacks. There were a number of other considerations Twitter had to make, including:

- They supported a legacy version of Twitter (known internally as M2) that operates without JavaScript, so they made sure the reloading solution didn’t require JavaScript either.
- They use CSP for security, and had to make sure the blank reloading page followed Twitter’s own CSP rules, which can vary from service to service.
- Twitter needed to pass through the original HTTP referrer to make sure metrics were still accurately attributing search engine referrals.
- They had to make sure the page wasn’t cached by the browser, or the blank page would reload itself indefinitely; they used cookies to detect such loops, showing a short friendly message and a manual link if the page appeared to be reloading more than once.

Implementing the SameSite cookie on major browsers

Although Twitter has implemented its own mitigation, it also discussed this issue with other major browser vendors regarding the SameSite cookie attribute. All major browsers have now implemented SameSite cookie support, including Chrome, Firefox, Edge, Internet Explorer 11, and Safari. Rather than adding the attribute to Twitter’s existing login cookie, they added two new cookies for SameSite, to reduce the risk of logout should a browser or network issue corrupt the cookie when it encounters the SameSite attribute.

Adding the SameSite attribute to a cookie is not at all time-consuming: one just needs to add "SameSite=lax" to the Set-Cookie HTTP header. However, Twitter’s servers depend on Finagle, a wrapper around Netty, which did not support extensions to the Cookie object. As per a Twitter post, “When investigating, we were surprised to find a feature request from one of our own developers the year before! But because SameSite was not an approved part of the spec, there was no commitment from the Netty team to implement. Ultimately we managed to add an override into our implementation of Finagle to support the new cookie attribute.”

Read more about this in detail on Twitter’s blog post.
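For illustration, here is a minimal Python sketch of a server setting a login cookie with the SameSite attribute, as described above. The cookie name and value are placeholders; this shows the header mechanics only, not Twitter’s Finagle-based implementation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # SameSite=Lax tells the browser not to attach this cookie to most
        # cross-site requests, so an attacker's page that forces loads of
        # this site's URLs gets "logged-out" responses. That uniformity is
        # what defeats the Silhouette timing technique.
        self.send_header(
            "Set-Cookie",
            "session_id=placeholder-value; Secure; HttpOnly; SameSite=Lax",
        )
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"logged in\n")

if __name__ == "__main__":
    # Serve on localhost:8000 for demonstration purposes.
    HTTPServer(("127.0.0.1", 8000), LoginHandler).serve_forever()
```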


Philips Hue’s second ongoing remote connectivity outage infuriates users

Savia Lobo
04 Jan 2019
2 min read
A day after Christmas, Philips Hue experienced an outage in which customers had issues creating new accounts, logging in, and linking their accounts to third parties. The company concluded that this was due to “a lot of new activations”. According to a TechCrunch post, “many people received Hue’s connected lighting products over the holidays and were now trying to set up their smart bulbs and other devices all around the same time. Hue’s servers couldn’t keep up with the demand and weren’t responding to the incoming requests”. This meant that users could not create or log into their MyHue accounts, or connect their lights to their Amazon Echo or Google Home.

Philips Hue’s Twitter account didn’t make a public announcement about the outage until December 26; instead, the company was only replying to individual users.
https://twitter.com/tweethue/status/1077996790035689474
The company then tweeted that the issue preventing successful account setup and device linking was resolved.
https://twitter.com/tweethue/status/1078415024908128259

Almost a week after claiming that the issue was resolved, the company tweeted that it was having an issue with remote connectivity (Out of Home, voice commands) and said it would resolve the issue soon. The local connection via Wi-Fi would not be affected, the company tweeted.
https://twitter.com/tweethue/status/1080867645858164736
One user tweeted pointing out that the company chose to inform users via Twitter rather than via a notification email.
https://twitter.com/bigjonvtpa/status/1080928370655924224

The company, however, told users that the issue would be resolved soon; in the meantime, they could disconnect their bridge for 30 seconds or try again later. To know more about this news in detail, head over to Philips Hue’s Twitter thread.


Is AT&T trying to twist data privacy legislation to its own favor?

Amarabha Banerjee
15 Oct 2018
4 min read
On September 26th, U.S. Senator John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, summoned a hearing titled ‘Examining Safeguards for Consumer Data Privacy’. Executives from AT&T, Amazon, Google, Twitter, Apple, and Charter Communications provided their testimonies to the Committee. The hearing took place to examine the privacy policies of top technology and communications firms, review the current state of consumer data privacy, and offer members the opportunity to discuss possible approaches to safeguarding privacy more effectively.

John Thune opened the meeting by saying, “This hearing will provide leading technology companies and internet service providers an opportunity to explain their approaches to privacy, how they plan to address new requirements from the European Union and California, and what Congress can do to promote clear privacy expectations without hurting innovation.”

There is, however, one major problem with this approach. A hearing on consumer privacy without any participation from the consumer side is like a meeting on women’s safety and empowerment without a single woman on the board. Why would the administration do such a thing? Perhaps it simply isn’t ready to bring all sides into one room. It did hold a second set of hearings with privacy advocates last week, but will that really bring a change in perspective? And where are we headed?

AT&T and net neutrality

One of the key issues at hand in this story is net neutrality. For those that don’t know, this is the principle that internet service providers should allow access to all content and applications regardless of the source, and shouldn’t be able to favor or block particular products or websites: in essence, a democratic internet. The recent repeal of net neutrality rules across the majority of U.S. states was arguably pushed and supported by major ISPs and corporations. This makes AT&T’s declaration that it wants to uphold user privacy rules seem farcical, like a hunter luring its prey with false reassurances.

As one of the leading telecom companies, AT&T has a significant stake in the online advertising and direct TV industries. The more it can track you online and record your habits, the better it can push ads and continue to milk user data without users being informed. That was its goal when it lobbied against the modest FCC user data privacy guidelines for broadband providers last year, before they could even take effect. Those rules largely just mandated that ISPs be transparent about what data is collected and who it’s being sold to, while requiring opt-in consent for particularly sensitive consumer data like your financial background.

When the same company rallies for user data privacy rules and tries to burden social media and search engine giants like Facebook, Google, and Microsoft, there is a definite doubt about its actual intent. The real aim might just be to weaken the power of major tech companies like Google and Facebook and push its own agenda via its broadband network. Monopoly in any form is not an ideal scenario for users and customers. While Google and Facebook are vying for a monopoly over how users interact online every day, AT&T is playing a different game altogether: gaining control of the internet itself.
Google, though, has plans to lay its own undersea internet cable, so it’s going to be hard for AT&T to compete, as admirable as its ostensible hubris might be. Still, there is a decent chance it might become a two-horse race by the middle of the next decade. Of course, the ultimate impact of this sort of monopoly remains to be seen. For AT&T, the opportunity is there, even if it looks like a big challenge.


How 3 glitches in Azure Active Directory MFA caused a 14-hour long multi-factor authentication outage in Office 365, Azure and Dynamics services

Savia Lobo
29 Nov 2018
3 min read
Early this week, Microsoft posted a report on what caused the multi-factor authentication outage in its Office 365 and Azure services last week, which prevented users from signing into their cloud services for 14 hours. Microsoft found three issues that combined to cause the log-in glitch. Interestingly, all three glitches occurred within a single system: Azure Active Directory Multi-Factor Authentication, the service Microsoft uses to monitor and manage multi-factor login for Azure, Office 365, and Dynamics. According to the report, “There were three independent root causes discovered. In addition, gaps in telemetry and monitoring for the MFA services delayed the identification and understanding of these root causes which caused an extended mitigation time.”

The three root causes for the multi-factor authentication outage

1. The first root cause manifested as a latency issue in the MFA frontend’s communication with its cache services. This issue began under high load, once a certain traffic threshold was reached. Once the MFA services experienced this first issue, they became more likely to trigger the second root cause.
2. The second root cause was a race condition in processing responses from the MFA backend server. It led to recycles of the MFA frontend server processes, which could trigger additional latency, and to the third root cause on the MFA backend.
3. The third root cause was a previously undetected issue in the backend MFA server, triggered by the second root cause. It caused an accumulation of processes on the MFA backend, leading to resource exhaustion, at which point the backend was unable to process any further requests from the MFA frontend while otherwise appearing healthy in monitoring.

On the day of the outage, these glitches hit EMEA and APAC customers first, and then US subscribers. According to The Register, “Microsoft would eventually solve the problem by turning the servers off and on again after applying mitigations. Because the services had presented themselves as healthy, actually identifying and mitigating the trio of bugs took some time.” Microsoft said, “The initial diagnosis of these issues was difficult because the various events impacting the service were overlapping and did not manifest as separate issues”.

The company is looking into ways to prevent a repeat of such an outage by reviewing how it handles updates and testing. It also plans to review its internal monitoring services and how it contains failures once they begin. To know more about this in detail, head over to Microsoft Azure’s official page.

Elite US universities including MIT and Stanford break off partnerships with Huawei and ZTE amidst investigations in the US

Sugandha Lahoti
04 Apr 2019
3 min read
The Massachusetts Institute of Technology has broken off its partnerships with Chinese telecom equipment makers Huawei and ZTE as both face US federal investigations. MIT follows similar moves by Stanford University, the University of California, Berkeley, and the University of Minnesota, which have all cut future research collaborations with Huawei.

Late last December, Huawei’s Chief Financial Officer, Wanzhou Meng, who is also the daughter of the company’s founder, was arrested in Canada over Huawei’s alleged violations of U.S. sanctions on Iran. Huawei had been under constant scrutiny by the US government following the ban on ZTE from selling devices with American-made hardware and software; ZTE, too, was found guilty of violating US sanctions on Iran. Then in January, the U.S. government officially charged Huawei with stealing T-Mobile’s trade secrets and with bank fraud tied to violating U.S. sanctions on Iran. Only a month later, Huawei came into the spotlight again for using dirty tactics to steal Apple’s trade secrets. U.S. companies such as Motorola and Cisco Systems have made similar claims against Huawei in civil lawsuits, and a Chicago-based company, Akhan Semiconductor, even cooperated with a federal investigation into a theft of its intellectual property by Huawei.

Huawei’s power in the mobile telecommunications sector and its blatant disregard for cybersecurity law is alarming. FBI Director Christopher Wray said the cases “expose Huawei’s brazen and persistent actions to exploit American companies and financial institutions and to threaten the free and fair global marketplace. That kind of access could give a foreign government the capacity to maliciously modify or steal information, conduct undetected espionage, or exert pressure or control.”

In a letter sent to faculty on Wednesday, Richard Lester, MIT’s associate provost, and Maria Zuber, the school’s vice-president for research, said, “At this time, based on this enhanced review, MIT is not accepting new engagements or renewing existing ones with Huawei and ZTE or their respective subsidiaries due to federal investigations regarding violations of sanction restrictions.” The letter further stated, “Most recently we have determined that engagements with certain countries – currently China [including Hong Kong], Russia and Saudi Arabia – merit additional faculty and administrative review beyond the usual evaluations that all international projects receive.”

Since Huawei’s ban in the US, the country has been trying to prevent its allies from using Huawei technology for critical infrastructure, focusing especially on the five English-speaking countries known as the Five Eyes (US, Canada, New Zealand, Australia, Great Britain). Australia and New Zealand have so far stopped operators from using Huawei equipment in their 5G networks. In the EU, meanwhile, policymakers have mandated that EU nations share data on 5G cybersecurity risks and produce measures to tackle them by the end of the year. “The aim is to use tools available under existing security rules plus cross-border cooperation,” the bloc’s executive body said. It is now up to individual EU countries to decide whether they want to ban any company on national security grounds.


Developers should be in charge of Application security: Whitesource security report

Savia Lobo
24 Jul 2019
6 min read
Security these days is a major concern for all organizations dealing with user data. New apps are developed daily, crunching user data to provide users with better services, great deals, discounts, and much more. Application security has become a top priority and needs to be taken care of at every stage of software development. Hence, over the years software testing has shifted from testing just before release to testing during the early stages of the software development lifecycle (SDLC). This helps developers discover vulnerabilities early and tackle them with less effort. A recent report from WhiteSource, an open-source security and license compliance management platform, highlights why developers should be in charge of application security and how organizations are investing heavily to produce secure code.

The development team should be in charge of software security

According to the WhiteSource report, day-to-day operational responsibility for application security now sits largely with the software development side: 71% of respondents said ownership lies with the DevOps teams, the development team leaders, or the developers themselves. This is because fixing a vulnerability in the development or coding phase produces better-secured applications. And, if these issues are handled by development teams, security teams can focus on bigger security aspects for the organization as a whole. Compared with the previous waterfall method, where software testing was done just before release, the DevOps approach moves testing to earlier phases to avoid bottlenecks at a later stage. The WhiteSource report says that “36% of organizations have moved past the initial implementation at testing at the build stage and are starting to integrate security testing tools at earlier points in the SDLC like the IDE and their repositories”.

How are organizations investing in secure code?

It is possible for a vulnerability to escape the final test rounds and affect users after release. This can bring customer dissatisfaction, bad reviews of the application, customer loss, and other damage. In such cases, organizations are trying their best to resolve vulnerabilities through testing tools, training, and time spent handling security vulnerabilities, the WhiteSource report says. “Along with training, developers are tooling up with a range of application security testing (AST) technologies with 68% of developers reporting using at least one of the following technologies: SAST, DAST, SCA, IAST or RASP”, the report says. For organizations working with DevOps, the question is not whether they should integrate automated tools into their pipeline, but which ones to adopt first.

Static Application Security Testing (SAST), also known as “white-box testing”, lets developers find security vulnerabilities in the application source code early in the SDLC. Dynamic Application Security Testing (DAST), also known as “black-box testing”, helps find security vulnerabilities and weaknesses in a running application (web apps). Interactive Application Security Testing (IAST) combines static and dynamic techniques to improve testing. According to Veracode, IAST analyzes code for security vulnerabilities while the app is run by an automated test, human tester, or any activity “interacting” with the application functionality.
Run-time Application Security Protection (RASP) lets an app run continuous security checks on itself and respond to live attacks by terminating an attacker’s session and alerting defenders to the attack.

Security in the development phase, an added task for developers

With the help of such technologies (SAST, DAST, SCA, IAST or RASP), issues can be flagged before and after production, adding visibility to the application’s security and enabling teams to be proactive. However, the issues are then constantly thrown at developers, who have to research and remediate them. “It is unreasonable to ask developers to handle all security alerts, especially as most application security tools are developed for security teams focused on coverage (detecting all potential issues), rather than accuracy and prioritization”, the WhiteSource team mentions. The report states, “Developers claim that they are spending a considerable amount of their time on dealing with remediations, with 42% reporting that they spend between 2 to 12 hours a month on these tasks, while another 33% say that they spend 12 to 36 hours on them.”

How can developers ensure security while choosing their open-source component?

Developers said they check for known vulnerabilities when they choose an open-source component. This ensures “their open source components are secure from the earliest stages of development”. A graph in the WhiteSource report shows that survey “respondents from North America (the U.S. and Canada) showed a higher level of awareness to check the vulnerability status of the open-source components that they were choosing”; for European respondents, open-source compliance rated higher on their priorities. On asking respondents how their organization detects vulnerable open-source components in their applications:

- 34% said they have tools that continuously detect open-source vulnerabilities in their applications
- 28% use a code scanner to review software once or twice a year
- 14% manually check for open-source vulnerabilities, but only the high-severity ones
- 24% said the security team notifies them

Once developers discover a known vulnerability in their product, they need a quick and effective path to remediating it. Most of them turn first to GitHub’s Security Alerts tool for help, WhiteSource reports; the report also lists other free security tools on the market similar to GitHub’s.

Detection vs Remediation of vulnerabilities

Developers take a proactive approach to detecting vulnerabilities. However, the same isn’t true of vulnerability remediation: “25% of developers only report on detected vulnerabilities and 53% are taking actions only in specific cases,” the report states. “Developers are investing many hours in research and remediation so why aren’t we seeing more developers taking action? The reason probably lies in the fact that most application security tools' main goal is to detect, alert and report.” We cannot just blame developers if a vulnerability is found; they also need tools of the same quality that speed up the process of vulnerability remediation. Manual processes, by contrast, are time-consuming and require a particular skill set, which poses its own challenges. WhiteSource concludes that next-generation application security tools will be those that are developer-focused, closing the loop from detection of an issue all the way through validation, research, and remediation of the issue.
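As a sketch of the kind of known-vulnerability check described above, the snippet below queries the public OSV vulnerability database (api.osv.dev) for advisories affecting a single package version. OSV is one option among many and is not mentioned in the WhiteSource report; the package name and version here are placeholders chosen for illustration.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name, version, ecosystem="PyPI"):
    """Query the OSV database for known vulnerabilities in one component."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,  # POST body; OSV returns {"vulns": [...]} on a match
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Hypothetical pre-adoption check of a dependency.
    for vuln in known_vulnerabilities("requests", "2.19.1"):
        print(vuln["id"], vuln.get("summary", ""))
```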
To know about this survey in detail, read the WhiteSource developer security report.


Security researcher exposes malicious GitHub repositories that host more than 300 backdoored apps

Savia Lobo
05 Mar 2019
2 min read
An unnamed security researcher at dfir.it recently revealed GitHub accounts hosting more than “300 backdoored Windows, Mac, and Linux applications and software libraries”. In a blog post titled “The Supreme Backdoor Factory”, the researcher explains how he stumbled upon the malicious code and various other finds within the GitHub repositories.

The investigation started when the researcher spotted a malicious version of the JXplorer LDAP browser. “I did not expect an installer for a quite popular LDAP browser to create a scheduled task in order to download and execute PowerShell code from a subdomain hosted by free dynamic DNS provider,” he writes. According to ZDNet, “All the GitHub accounts that were hosting these files --backdoored versions of legitimate apps-- have now been taken down.”

The malicious files included code that gave the malware boot persistence on infected systems and let it download further malicious code. The researcher also notes that the malicious apps downloaded a Java-based malware named Supreme NYC Blaze Bot (supremebot.exe). “According to researchers, this appeared to be a ‘sneaker bot,’ a piece of malware that would add infected systems to a botnet that would later participate in online auctions for limited edition sneakers”, ZDNet reports.

The researcher revealed that some of the malicious entries were made via an account under the name Andrew Dunkins that included a set of nine repositories, each hosting Linux cross-compilation tools. Each repository was watched or starred by several already known suspicious accounts. Accounts that did not host backdoored apps were used to ‘star’ or ‘watch’ the malicious repositories and help boost their popularity in GitHub’s search results.

To know more about these backdoored apps in detail, read the complete report, ‘The Supreme Backdoor Factory’.
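The JXplorer persistence trick above (a scheduled task that pulls PowerShell from a dynamic DNS subdomain) suggests a simple triage idea: enumerate scheduled tasks and flag download-and-execute actions. Below is a rough, hypothetical Python sketch along those lines; it assumes a Windows host, the verbose CSV output of the schtasks utility (including its "Task To Run" column), and a keyword list that is purely illustrative, not a reliable detector.

```python
import csv
import io
import subprocess

# Illustrative substrings often seen in PowerShell download cradles.
DOWNLOAD_HINTS = ("downloadstring", "downloadfile", "invoke-webrequest", "iex ")

def suspicious_scheduled_tasks():
    """Flag Windows scheduled tasks whose action looks like a PowerShell downloader."""
    out = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for row in csv.DictReader(io.StringIO(out)):
        action = (row.get("Task To Run") or "").lower()
        if "powershell" in action and any(h in action for h in DOWNLOAD_HINTS):
            hits.append((row.get("TaskName", "?"), action))
    return hits

if __name__ == "__main__":
    for name, action in suspicious_scheduled_tasks():
        print(f"{name}: {action}")
```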


Australia passes a rushed anti-encryption bill “to make Australians safe”; experts find “dangerous loopholes” that compromise online privacy and safety

Sugandha Lahoti
07 Dec 2018
3 min read
On Thursday, Australia passed a rushed Assistance and Access Bill which gives Australian police and government agencies the power to issue technical notices. The Labor party had planned to amend the legislation; however, even after calling the bill flawed, Labor pulled its amendments in the Senate and the bill was passed. “Let's just make Australians safer over Christmas,” Bill Shorten, leader of the Opposition and the Labor Party, said on Thursday evening. “It's all about putting people first.”

The bill provides vague answers about the potential power it could give government and law enforcement over digital privacy. The government claims that encrypted communications are “increasingly being used by terrorist groups and organized criminals to avoid detection and disruption,” and so this bill will ask tech companies to provide assistance in accessing electronic data. Per ZDNet, under the new bill, Australian government agencies can issue three kinds of notices to companies and websites:

- Technical Assistance Notices (TAN): compulsory notices requiring a communication provider to use an interception capability it already has.
- Technical Capability Notices (TCN): compulsory notices requiring a communication provider to build a new interception capability, so that it can meet subsequent Technical Assistance Notices.
- Technical Assistance Requests (TAR): described by experts as the most dangerous of all.

In essence, the Australian government can hack, implant malware, undermine encryption, or insert backdoors across companies and websites; companies that refuse may face financial penalties. Although the government has said this bill targets criminals such as sex offenders, terrorists, and homicide and drug offenders, critics think otherwise. According to the Communications Alliance, the bill contains dangerous loopholes and technical backdoors that could be exploited by hackers. Another point of debate is the lack of a clear definition of the term “systemic weakness”; Labor has asked for a more concrete definition in the amendments to be made to the law next year.

Several lawmakers, as well as the general public, condemned the bill on Twitter, pointing out its rushed passage.
https://twitter.com/timwattsmp/status/1069361402589011968?s=21
https://twitter.com/jordonsteele/status/1070170310626828288?s=12
https://twitter.com/Asher_Wolf/status/1070692137052758016
https://twitter.com/Scottludlam/status/1070592908292612096
https://twitter.com/Jordonsteele/status/1070565215106818048
https://twitter.com/AdamBandt/status/1070492876365225985

Chrome 69 privacy issues: automatic sign-ins and retained cookies; Chrome 70 to correct these

Prasad Ramesh
27 Sep 2018
4 min read
There are privacy concerns with Chrome 69, the latest release of the popular browser. The concerns revolve around signing into Chrome and the storage of cookies, both of which changed in the new release.

What are the privacy concerns with Chrome 69?

The Google Chrome 69 update brought a new interface, UI changes, and a feature that automatically signs you into Chrome if you sign into any of Google’s services. This was met with heavy criticism from privacy-conscious users. It is not the first time Google has been questioned over user privacy and the data it collects; the company previously changed its privacy policy to avoid GDPR fines on the scale of billions of dollars. Previously, users had the option to sign in to Chrome with their Google credentials, but the Chrome 69 update changes this: signing into any Google service automatically signs you into Chrome. Google noted, however, that this would not turn on the sync feature by default.

Another concern with Chrome 69 is that clearing all browsing history and cookies clears everything except Google sites. So, after clearing all browsing history and data, you are still left with Google cookies and data on your machine if you use Chrome.

Source: Google Blog

What are people saying?

In a blog post, Johns Hopkins professor Matthew Green stated: “Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern.” Christoph Tavan, CTO and co-founder of @contentpass, tweeted that cookies from Google sites remain on your machine even after clearing all browser data.
https://twitter.com/ctavan/status/1044282084020441088
John Graham-Cumming, Cloudflare CTO, tweeted that he won’t be using Chrome anymore:
https://twitter.com/jgrahamc/status/1044123160243826688
A comment on Reddit reads: “This is actually ok. It's not incredibly invasive, and it just creates a chrome user profile when you sign in. They say that it will solve the confusion of the two separate sign ins.”

What does Google have to say about this?

Chrome 70, to be released in mid-October, will roll back this move. In a blog post, Chrome product manager Zach Koch states: “While we think sign-in consistency will help many of our users, we’re adding a control that allows users to turn off linking web-based sign-in with browser-based sign-in—that way users have more control over their experience. For users that disable this feature, signing into a Google website will not sign them into Chrome.” Google Chrome engineer Adrienne Porter Felt replied with an explanation of why automatic sign-in was turned on by default in Chrome 69: the intent is to prevent a common confusion where the login state of the browser differs from the login state of the content area. The reply from a Google engineer is not sufficient, notes Green. In the Chrome blog post, Google also addressed the concerns with cookies: “We’re also going to change the way we handle the clearing of auth cookies. In the current version of Chrome, we keep the Google auth cookies to allow you to stay signed in after cookies are cleared. We will change this behavior so that all cookies are deleted and you will be signed out.”

Ending thoughts

It is concerning that signing into any Google product automatically signs you into Chrome.
Moreover, syncing is just an accidental click away, and many people wouldn’t want their data synced like that. If sync is not turned on by default, why sign users in by default in the first place? It makes sense where multiple accounts are in play, but in any case there should be a prompt that makes users consciously choose to sign in to Chrome. Had the user backlash not happened, the next step might well have been auto-sync on login. This design choice has eroded trust and goodwill among many Chrome users, some of whom are now seriously looking for viable alternatives.


Apple announces the iOS 12.1.4 with a fix for its Group FaceTime video bug

Savia Lobo
08 Feb 2019
2 min read
Yesterday, Apple announced the release of iOS 12.1.4 to fix the Group FaceTime bug discovered at the end of last month. Apple had immediately disabled Group FaceTime after the bug, which allowed callers to eavesdrop on people before they even picked up the call, was discovered. Apple also plans to reward 14-year-old Grant Thompson and his mother for first reporting the bug: the company is “compensating the Thompson family for discovering the vulnerability and providing an additional gift to fund Grant Thompson’s tuition”, the Verge reports.

As reported by TechCrunch, an Apple spokesperson told them in a statement, “In addition to addressing the bug that was reported, our team conducted a thorough security audit of the FaceTime service and made additional updates to both the FaceTime app and server to improve security. This includes a previously unidentified vulnerability in the Live Photos feature of FaceTime.”

Source: The Verge

“To protect customers who have not yet upgraded to the latest software, we have updated our servers to block the Live Photos feature of FaceTime for older versions of iOS and macOS”, Apple reports. To know more about this news in detail, head over to The Verge.


A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report

Savia Lobo
09 Nov 2018
2 min read
Yesterday, Microsoft users reported that a bug affecting the Microsoft Windows activation service was causing Windows 10 Pro licenses to be downgraded to Windows 10 Home, says BleepingComputer. The bug flashes a message on users’ screens stating that their license is not activated and prompts them to troubleshoot the problem.

Windows 10 Pro not activated (Source: Bleeping Computer)
Troubleshooting Completed (Source: Bleeping Computer)

According to a Reddit post, "Microsoft has just released an Emerging issue announcement about current activation issue related to Pro edition recently. This happens in Japan, Korea, America, and many other countries. I am very sorry to inform you that there is a temporary issue with Microsoft's activation server at the moment and some customers might experience this issue where Windows is displayed as not activated. Our engineers are working tirelessly to resolve this issue and it is expected to be corrected within one to two business days."

Jeff Jones, Sr. Director at Microsoft, told BleepingComputer, “We’re working to restore product activations for the limited number of affected Windows 10 Pro customers.” He added, “A limited number of customers experienced an activation issue that our engineers have now addressed. Affected customers will see a resolution over the next 24 hours as the solution is applied automatically. In the meantime, they can continue to use Windows 10 Pro as usual.”

To know more about this news, head over to Bleeping Computer.

Click2Gov software vulnerable for the second time; breach hits 8 US cities

Savia Lobo
20 Sep 2019
4 min read
Click2Gov, a vulnerable municipality software package, is part of a breach involving eight cities last month, Threatpost reports. Click2Gov powers the self-service bill-paying portals that utilities and community development organizations use for payments such as parking tickets. This is not the first time a vulnerability in the software has affected a large number of people: the flaw was first discovered in December 2018, when hackers exploiting the software compromised over 300,000 payment card records from dozens of cities across the United States and Canada between 2017 and late 2018.

Hackers are taking a second aim at Click2Gov

The team of researchers at Gemini Advisory who covered the 2018 breach has now observed a second wave of Click2Gov breaches, beginning in August 2019 and affecting over 20,000 records from eight cities across the United States. The portals of six of the eight cities had been compromised in the initial breach, and the user records have been offered for sale online via illicit markets. The impacted towns are Deerfield Beach, Fla.; Palm Bay, Fla.; Milton, Fla.; Coral Springs, Fla.; Bakersfield, Calif.; Pocatello, Ida.; Broken Arrow, Okla.; and Ames, Iowa.

“While many of the affected cities have patched their systems since the original breach, it is common for cybercriminals to strike the same targets twice. Thus, several of the same cities were affected in both waves of breaches,” the Gemini Advisory researchers write in their official post. “Analysts confirmed that many of the affected towns were operating patched and up-to-date Click2Gov systems but were affected nonetheless. Given the success of the first campaign, which generated over $1.9 million in illicit revenue, the threat actors would likely have both the motive and the budget to conduct a second Click2Gov campaign,” they further added.

According to a FireEye report published last year, in the 2018 attack the attackers compromised the Click2Gov webserver, installed a web shell called SJavaWebManage, and then uploaded a tool that allowed them to parse log files, retrieve payment card information, and remove all log entries. Superion (now CentralSquare Technologies, the owner of the Click2Gov software) acknowledged directly to Gemini Advisory that despite broad patch deployment the system remains vulnerable for an unknown reason.

Of this year’s attack, the researchers say “the portal remains a viable attack surface. These eight cities were in five states, but cardholders in all 50 states were affected. Some of these victims resided in different states but remotely transacted with the Click2Gov portal in affected cities, potentially due to past travels or to owning property in those cities.”

Map depicting cities affected only by the original Click2Gov breach (yellow) and those affected by the second wave of Click2Gov breaches (blue). Source: Gemini Advisory

Threatpost contacted the eight towns; most did not respond, but some confirmed the breach in their Click2Gov utility payment portals, and some took their portals offline shortly after being contacted. CentralSquare Technologies did not immediately comment.
To know more about this news in detail, read Gemini Advisory’s official post.


Upcoming Firefox update will, by default, protect users’ privacy by blocking ad tracking

Melisha Dsouza
31 Aug 2018
3 min read
Mozilla is taking a stand against web advertising practices with an announcement today that its Firefox browser will soon block web trackers by default. Users can expect a series of updates over the next few months as this feature rolls out. This proactive approach to protecting consumer privacy aims to give users more choice over what information they share with third-party sites. Mozilla has always been at the forefront of protecting user privacy: it blocked pop-up ads in the very first public Firefox release in 2004, and began wholesale blocking of ads and trackers in private browsing mode in 2015. Mozilla has made it clear that even though some sites will continue to want user data in exchange for content, they will have to ask users for it. This gives advertising platforms a reason to care about their users’ experience, and it is a positive change for people who until now had no idea of the value exchange they were being asked to make.

Mozilla’s three key initiatives to put this approach into practice:

#1 Improving page load performance

A new feature will be introduced in Firefox Nightly that blocks trackers that slow down page loads. Loading third-party trackers slows down a website as a whole; for users on slower networks, the effect is worse, hurting the overall experience on the web. Firefox will study the effects of blocking trackers and test the new feature in a shield study in September. If the approach succeeds in improving page performance, slow-loading trackers will be blocked by default in Firefox 63.

#2 Removing cross-site tracking

Users expect a certain level of privacy on the web; however, many web browsers fail to help users obtain it. Taking this into account, Firefox will strip cookies and block storage access from third-party tracking content. This is already available for Firefox Nightly users to try out, and a shield study will be carried out with some beta users in September. All Firefox 65 users can expect this update soon. After all, no one appreciates being constantly tracked by third-party sites that collect information in secret.

#3 Mitigating harmful practices

The third approach Mozilla is taking is to block harder-to-detect practices like fingerprinting, a technique that invisibly identifies users by their device properties. This will also put a stop to cryptomining scripts that silently mine cryptocurrencies on users’ devices.

The Twitter community has received this news well, and many Firefox users have expressed their appreciation for this initiative.

Source: Twitter

The November release of Firefox 57 added an option to let people block all trackers. Worldwide, 1.3 percent of people enable Firefox tracking protection today; out of 250 million monthly active users, that represents the choice of about 3 million people. Now, as a bonus, users can block ad trackers as well.

Source: Cnet.com

This goes to show the level of trust that users have in Firefox, and we are sure that, as always, Firefox will not disappoint. You can read the detailed news about the upcoming update on Mozilla’s official blog.