
How-To Tutorials

MITRE’s 2019 CWE Top 25 most dangerous software errors list released

Savia Lobo
19 Sep 2019
10 min read
Two days ago, the Cybersecurity and Infrastructure Security Agency (CISA) announced MITRE’s 2019 Common Weakness Enumeration (CWE) Top 25 Most Dangerous Software Errors list. The list is a compilation of the most frequent and critical errors that can lead to serious vulnerabilities in software. To aggregate the data for this list, the CWE Team used a data-driven approach that leverages published Common Vulnerabilities and Exposures (CVE®) data and the related CWE mappings found within the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD), as well as the Common Vulnerability Scoring System (CVSS) scores associated with those CVEs. The team then applied a scoring formula (elaborated in later sections) to determine the level of prevalence and danger each weakness presents. This Top 25 list draws on NVD data from 2017 and 2018, which consisted of approximately twenty-five thousand CVEs.

The previous SANS/CWE Top 25 list was released in 2011, and the major difference between the 2011 list and the current 2019 one lies in the approach used. In 2011, the list was constructed using surveys and personal interviews with developers, top security analysts, researchers, and vendors. The responses were normalized based on prevalence and ranked using the CWSS methodology. In the 2019 CWE Top 25, by contrast, the list was formed from the real-world vulnerabilities found in NVD data.

CWE Top 25 dangerous software errors developers should watch out for

Improper Restriction of Operations within the Bounds of a Memory Buffer

In this error (CWE-119), the software performs operations on a memory buffer, but it can read from or write to a memory location that is outside of the intended boundary of the buffer. The likelihood of exploit is high, as an attacker may be able to execute arbitrary code, alter the intended control flow, read sensitive information, or cause the system to crash. This error can be exploited in any programming language without memory management support to attempt an operation outside of the bounds of a memory buffer, but the consequences will vary widely depending on the language, platform, and chip architecture.

Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

With this error, CWE-79, the software does not neutralize, or incorrectly neutralizes, user-controllable input before it is placed in output that is used as a web page served to other users. Once a malicious script is injected, the attacker can transfer private information, such as cookies that may include session information, from the victim's machine to the attacker. This error can also allow attackers to send malicious requests to a website on behalf of the victim, which could be especially dangerous to the site if the victim has administrator privileges to manage that site. Such XSS flaws are very common in web applications, since avoiding them requires a great deal of developer discipline.

Improper Input Validation

With this error, CWE-20, the product does not validate, or incorrectly validates, input, thus affecting the control flow or data flow of a program. This can allow an attacker to craft input in a form that is not expected by the rest of the application. Parts of the system then receive unintended input, which may result in an altered control flow, arbitrary control of a resource, or arbitrary code execution.
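As a purely hypothetical illustration of CWE-20 (the handler and field names below are made up, not taken from any real application), the following Python sketch contrasts a handler that trusts a request parameter with one that checks it against an allow-list before use:

```python
# Hypothetical sketch of CWE-20 (Improper Input Validation); all names are illustrative.

VALID_SORT_FIELDS = {"name", "date", "size"}

def build_sort_clause_unsafe(params: dict) -> str:
    # Improper input validation: whatever the client sent flows straight
    # into downstream logic (and eventually into a query or command).
    return "ORDER BY " + params["sort"]

def build_sort_clause_validated(params: dict) -> str:
    sort = params.get("sort", "name")
    if sort not in VALID_SORT_FIELDS:            # reject anything unexpected
        raise ValueError(f"unsupported sort field: {sort!r}")
    return "ORDER BY " + sort

print(build_sort_clause_validated({"sort": "date"}))                  # ORDER BY date
print(build_sort_clause_unsafe({"sort": "date; DROP TABLE users"}))   # attacker-chosen text passes through
```

The same unvalidated input is also what typically feeds the injection weaknesses discussed below.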
Input validation is problematic in any system that receives data from an external source. “CWE-116 [Improper Encoding or Escaping of Output] and CWE-20 have a close association because, depending on the nature of the structured message, proper input validation can indirectly prevent special characters from changing the meaning of a structured message,” the researchers mention in the CWE definition post.

Information Exposure

This error, CWE-200, is the intentional or unintentional disclosure of information to an actor that is not explicitly authorized to have access to it. According to the CWE Individual Dictionary Definition, “Many information exposures are resultant (e.g. PHP script error revealing the full path of the program), but they can also be primary (e.g. timing discrepancies in cryptography). There are many different types of problems that involve information exposures. Their severity can range widely depending on the type of information that is revealed.” This error can occur on specific named languages, operating systems, architectures, paradigms (such as mobile), and technologies, or on a class of such platforms.

Out-of-bounds Read

In this error, CWE-125, the software reads data past the end, or before the beginning, of the intended buffer. This can allow attackers to read sensitive information from other memory locations or cause a crash. The software may modify an index or perform pointer arithmetic that references a memory location outside of the boundaries of the buffer. This error may occur on specific named languages (C, C++), operating systems, architectures, paradigms, and technologies, or on a class of such platforms.

Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')

In this error (weakness ID CWE-89), the software constructs all or part of an SQL command using externally-influenced input from an upstream component, but it does not neutralize, or incorrectly neutralizes, special elements that could modify the intended SQL command when it is sent to a downstream component. This error can be used to alter query logic to bypass security checks, or to insert additional statements that modify the back-end database, possibly including the execution of system commands. It can occur on specific named languages, operating systems, architectures, paradigms, and technologies, or on a class of such platforms.

Cross-Site Request Forgery (CSRF)

In this error, CWE-352, the web application does not, or cannot, sufficiently verify whether a well-formed, valid, consistent request was intentionally provided by the user who submitted it. This might allow an attacker to trick a client into making an unintentional request to the web server, which will be treated as an authentic request. The likelihood of occurrence of this error is medium. The forged request can be delivered via a URL, image load, XMLHttpRequest, etc., and can result in the exposure of data or unintended code execution.

Integer Overflow or Wraparound

In this error, CWE-190, the software performs a calculation that can produce an integer overflow or wraparound, while the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control.
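Python's built-in integers do not wrap, so the sketch below simulates 32-bit unsigned arithmetic purely as an illustration of CWE-190, showing how a size calculation can wrap around and end up far smaller than intended:

```python
# Hypothetical illustration of CWE-190: simulating 32-bit unsigned arithmetic
# to show how a size calculation can wrap around.

UINT32_MAX = 0xFFFFFFFF

def alloc_size(count: int, item_size: int) -> int:
    """Size calculation as a 32-bit machine would perform it."""
    return (count * item_size) & UINT32_MAX   # wraps instead of growing

# An attacker-supplied count chosen so that count * 8 exceeds 2**32.
count = 0x20000001            # 536,870,913 items
size = alloc_size(count, 8)
print(size)                   # 8 -- far smaller than the ~4 GiB actually needed

# Code that allocates `size` bytes and then writes `count` items into the
# buffer would overflow it, turning the wraparound into a memory-safety bug.
```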
Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

In this error, CWE-22, the software uses external input to construct a pathname that is intended to identify a file or directory located underneath a restricted parent directory. However, the software does not properly neutralize special elements within the pathname, which can cause the pathname to resolve to a location outside of the restricted directory. In most programming languages, injection of a null byte (the 0 or NUL) may allow an attacker to truncate a generated filename and widen the scope of the attack. For example, when the software adds ".txt" to any pathname, this may limit the attacker to text files, but a null injection may effectively remove this restriction.

Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

In this error, CWE-78, the software constructs all or part of an OS command using externally-influenced input from an upstream component, but it does not neutralize, or incorrectly neutralizes, special elements that could modify the intended OS command when it is sent to a downstream component. This error can allow attackers to execute unexpected, dangerous commands directly on the operating system. The weakness can lead to a vulnerability in environments in which the attacker does not have direct access to the operating system, such as in web applications. Alternately, if the weakness occurs in a privileged program, it could allow the attacker to specify commands that normally would not be accessible, or to call alternate commands with privileges that the attacker does not have. The researchers write, “More investigation is needed into the distinction between the OS command injection variants, including the role with argument injection (CWE-88). Equivalent distinctions may exist in other injection-related problems such as SQL injection.”

Here’s the list of the remaining errors from MITRE’s 2019 CWE Top 25 list:

CWE ID     Name of the error                                            Score
CWE-416    Use After Free                                               17.94
CWE-287    Improper Authentication                                      10.78
CWE-476    NULL Pointer Dereference                                      9.74
CWE-732    Incorrect Permission Assignment for Critical Resource         6.33
CWE-434    Unrestricted Upload of File with Dangerous Type               5.50
CWE-611    Improper Restriction of XML External Entity Reference         5.48
CWE-94     Improper Control of Generation of Code ('Code Injection')     5.36
CWE-798    Use of Hard-coded Credentials                                 5.1
CWE-400    Uncontrolled Resource Consumption                             5.04
CWE-772    Missing Release of Resource after Effective Lifetime          5.04
CWE-426    Untrusted Search Path                                         4.40
CWE-502    Deserialization of Untrusted Data                             4.30
CWE-269    Improper Privilege Management                                 4.23
CWE-295    Improper Certificate Validation                               4.06

To know about the other errors in detail, read CWE’s official report.

Scoring formula to calculate the rank of weaknesses

The CWE Team developed a scoring formula to calculate a rank order of weaknesses. The formula combines the frequency with which a CWE is the root cause of a vulnerability with the projected severity of its exploitation. In both cases, the frequency and severity are normalized relative to the minimum and maximum values seen.
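As an illustration only, here is a small Python sketch of that normalize-and-combine calculation. The way the two terms are combined (normalized frequency multiplied by normalized severity, scaled to 100) follows the formula MITRE published for the 2019 list, but every input number below is made up:

```python
# Simplified sketch of the normalize-and-combine scoring described above.
# The combination (normalized frequency x normalized severity x 100) follows
# MITRE's published 2019 formula; the input numbers are purely illustrative.

def normalize(value, lo, hi):
    return (value - lo) / (hi - lo)

# Hypothetical inputs for one CWE entry:
cve_count = 3_500                    # CVEs in NVD 2017-2018 mapped to this CWE
avg_cvss  = 7.8                      # average CVSS score of those CVEs
min_count, max_count = 10, 25_000    # min/max counts seen across all CWEs
min_cvss,  max_cvss  = 1.0, 10.0     # min/max average CVSS seen across all CWEs

frequency = normalize(cve_count, min_count, max_count)   # ~0.14
severity  = normalize(avg_cvss,  min_cvss,  max_cvss)    # ~0.76
score = frequency * severity * 100
print(round(score, 2))                                   # ~10.55
```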
A few properties of the scoring method include:

- Weaknesses that are rarely exploited will not receive a high score, regardless of the typical severity associated with any exploitation. This makes sense: if developers are not making a particular mistake, the weakness should not be highlighted in the CWE Top 25.
- Weaknesses with a low impact will not receive a high score. This again makes sense, since the inability to cause significant harm by exploiting a weakness means that weakness should be ranked below those that can.
- Weaknesses that are both common and can cause harm should receive a high score.

However, there are a few limitations to the methodology of the data-driven approach chosen by the CWE Team.

Limitations of the data-driven methodology

- The approach only uses data publicly reported and captured in NVD, while numerous vulnerabilities exist that do not have CVE IDs. Vulnerabilities that are not included in NVD are therefore excluded.
- For vulnerabilities that receive a CVE, there is often not enough information to make an accurate (or precise) identification of the appropriate CWE being exploited.
- There is an inherent bias in the CVE/NVD dataset due to the set of vendors that report vulnerabilities and the languages those vendors use. If one of the largest contributors to CVE/NVD primarily uses C as its programming language, the weaknesses that often exist in C programs are more likely to appear.
- Another bias in the CVE/NVD dataset is that most vulnerability researchers and/or detection tools are very proficient at finding certain weaknesses but do not find other types. Weaknesses that researchers and tools struggle to find end up being under-represented in the 2019 CWE Top 25.
- Gaps or perceived mischaracterizations of the CWE hierarchy itself lead to incorrect mappings.
- The metric indirectly prioritizes implementation flaws over design flaws, due to their prevalence within individual software packages. For example, a web application may have many different cross-site scripting (XSS) vulnerabilities due to a large attack surface, yet only one instance of the use of an insecure cryptographic algorithm.

https://twitter.com/mattfahrner/status/1173984732926943237

To know more about the CWE Top 25 list in detail, head over to MITRE’s CWE Top 25 official page.

Other news in Security

LastPass patched a security vulnerability from the extensions generated on pop-up windows

An unsecured Elasticsearch database exposes personal information of 20 million Ecuadoreans including 6.77M children under 18

A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports


As Kickstarter reels in the aftermath of its alleged union-busting move, is the tech industry at a tipping point?

Vincy Davis
18 Sep 2019
12 min read
Update: Given the immense dissent Kickstarter has faced since its alleged union-busting move of firing two of its union-organizing workers, many would have hoped the public-benefit corporation would change its stand. However, in an email statement to Current Affairs on September 28th, Kickstarter’s CEO, Aziz Hasan, made the company’s position firm and clear. Hasan confirmed that Kickstarter is standing by its decision to fire the organizers and would be fighting the lawsuit they filed with the National Labor Relations Board. He went on to confirm that the company will not voluntarily recognize a union, even if a majority of its workers sign in support of one, and he dubbed the union framework “inherently adversarial”. Hasan also pledged not to remain neutral on unionization, saying that Kickstarter would continue to actively oppose unionization efforts.

https://twitter.com/NathanJRobinson/status/1177668530864607232

After Kickstarter’s stand was made public on Twitter, many influential creators of past successful Kickstarter projects have openly pledged to boycott Kickstarter for future projects.

https://twitter.com/vornietom/status/1178203986890899457
https://twitter.com/joshfoxfilm/status/1177721539015385094
https://twitter.com/neilhimself/status/1178046366137892865
https://twitter.com/GunnerGale/status/1178239961314840576

Update: Following the alleged union-busting move at Kickstarter, many workers around the world came out in support of the two fired employees. On September 24th, 80 Google contract workers in Pittsburgh employed by HCL voted in favor of unionizing with the United Steel Workers (USW). They will now be working under the name Pittsburgh Association of Tech Professionals (PATP). Per Vice, the main aim of the union is to create an umbrella organization to facilitate organizing in the tech sector, and eventually to help tech locals coordinate and cooperate amongst themselves. According to Motherboard, HCL America’s deputy general manager of operations had also sent out emails to all the contractors before the vote, in an attempt to prevent them from unionizing.

https://twitter.com/veenadubal/status/1176631300498657280

The vote to unionize is a historic moment among tech workers and will motivate others to come forward and form unions. Meredith Whittaker, ex-Google Walkout organizer, who left Google over ethical concerns, has come out in support of the Google contractors unionizing.

https://twitter.com/mer__edith/status/1176581810035273728

The HCL workers have also expressed their solidarity with Kickstarter workers.

https://twitter.com/SethGoldstein13/status/1174815837762531328

HCL America has not yet officially commented on its workers unionizing. Head over to Vice for the full coverage on the Google workers unionizing.

The past week has been quite rough for many at Kickstarter, the American public-benefit corporation. Kickstarter fired two of its employees, Taylor Moore, Head of Comedy and Podcasts at Kickstarter, and Clarissa Redwine, former Senior Design & Tech Outreach Lead at Kickstarter, citing performance issues. Both workers have openly protested against Kickstarter on Twitter.

https://twitter.com/taylordotbiz/status/1172257828473573377
https://twitter.com/ClarissaRedwine/status/1172167251623124997

The workers claim that there were no performance issues on their end, and that their dismissal is the result of their participation in the unionization effort within the company.
In her own words, Redwine says, “Kickstarter's management continues to state that I was fired for performance issues. I find this strange because I not only met but exceeded all performance metrics in Q2. I was great at my job. And I loved it.” Moore also reported on Twitter that another prominent member of the proposed union (Kickstarter United) has been asked to leave the firm by Kickstarter. These actions have, of course, garnered a lot of negative reactions for the company, with people tagging Kickstarter as a ‘union buster’.

https://twitter.com/GJenkins310114/status/1172765408287428608
https://twitter.com/nickblackford/status/1172432658372055041

Moore is now urging fellow Kickstarter staff members to unionize and sign petitions to exercise their right to a fair and equitable workplace.

https://twitter.com/taylordotbiz/status/1172260259823534080

Following the furore, Kickstarter said in a statement to Engadget that employees who fail to meet expectations are bound to face this treatment. The statement further reads, “Kickstarter has not fired or otherwise retaliated against anyone for union organizing. Obviously we knew how these terminations could be perceived. But it would be unfair to not hold these people to the same standards as the rest of our staff simply because they are union organizers.”

Read Also: Perry Chen, Kickstarter CEO steps down from his role in the middle of employees’ Unionization efforts

Per Slate, two days ago, the labor union working with employees at Kickstarter filed a charge with the National Labor Relations Board. The union has accused Kickstarter of wrongfully terminating Redwine and Moore. It further alleges that “Kickstarter interfered with employees’ right to organize, since such firings could have a chilling effect on union efforts. Moore and Redwine are asking for back pay and to be reinstated to their positions. Next, the NLRB will ask the union to file an affidavit describing the charges, and the employer will have to respond.”

https://twitter.com/NathanJRobinson/status/1173287284663345152

The union has called on Kickstarter project creators to express their solidarity with the Kickstarter workers’ union and to condemn the company’s firings through a ‘Support of Unionizing Kickstarter Workers’ form. With 80 Kickstarter creators already signed up, the statement in support of the Kickstarter Union has received plenty of appreciation from the public on Twitter. One Kickstarter creator tweeted, “I owe my fine art career to Kickstarter. It is a site I love, and a site that changed my life. They need to recognize the union. I signed this petition.” Kickstarter has not released a statement regarding the lawsuit yet.

Unfair treatment of protesting workers has always been the norm

Kickstarter has now joined a long list of firms, like Google and Npm Inc, that have opposed workers organizing to demand better treatment. For workers, unions act as negotiators who can talk or protest on their behalf against unfair working conditions and labor practices such as inadequate compensation, harassment, and biased hiring. On the other hand, employers’ drive to maximize company profit at the cost of employees’ salaries or time is met with resistance from trade unions. With such conflicting goals, it comes as no surprise that organizations will attempt to remove the organizers behind mass protests, to prevent such movements from taking concrete shape in the form of trade unions.
Another method used by firms is demotion or an unplanned transfer, as experienced by Claire Stapleton and Meredith Whittaker, ex-Google employees. They played a major role in organizing a pan-Google walkout in November last year, but had to quit Google eight months later, after facing retaliation for protesting against the company.

Read Also: Meredith Whittaker, Google Walkout organizer, and AI ethics researcher is leaving the company, adding to its brain-drain woes over ethical concerns

Read Also: Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management

The organizers of the Google Walkout had laid out five demands for change within the workplace:

- An end to forced arbitration in cases of harassment and discrimination
- A commitment to end pay and opportunity inequities
- A publicly disclosed sexual harassment transparency report
- A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously
- Elevating the Chief Diversity Officer to answer directly to the CEO

In April, NPM Inc, the company behind the widely used npm JavaScript package repository, dismissed five of its employees, allegedly for engaging in union organizing activities. The union organizers were protesting against the company’s profit-prioritizing strategy at the expense of employees. According to a special report from The Register, NPM Inc was planning to fight the union-busting complaints by firing staffers, rather than settling their claims. Following the unrest in the company, the NPM Inc co-founder and chief data officer, Laurie Voss, quit the company in July. Voss’s resignation was the third in line, after Rebecca Turner, former core contributor, who resigned in March, and Kat Marchan, former CLI and community architect, who also resigned from NPM the same month. Voss stated in his blog that he supports unions: “As far as the labor dispute goes, I will say that I have always supported unions, I think they’re great, and at no point in my time at NPM did anybody come to me proposing a union,” he said. “If they had, I would have been in favor of it. The whole thing was a total surprise to me.” This may well be true, as employees tend not to talk to management for fear of retaliation.

Read Also: Npm Inc, after a third try, settles former employee claims, who were fired for being pro-union, The Register reports

Why is tech workers’ unionizing a tipping point?

There has always been a debate about the need for unions in tech, or whether the industry needs them at all. Those who argue against unions are of the opinion that the tech industry is already doing well in terms of money and safety for employees. Jeff Atwood, one of the co-founders of Stack Overflow, responded to the Kickstarter story by commenting, “I will never understand the desire to "unionize" in tech. Why should rich people unionize? So they can get .. uh.. even more money?”

Lack of Pay Equity

A popular issue that employees, including those in tech, face is pay equity, particularly for women. The current Equal Pay Act specifies that employers can’t differentiate salary based on gender (unless based on factors like seniority, merit, and work level). However, it was found that most women were making far less money than male colleagues with the same experience and job titles.
According to Vox, “women in the United States who work full time make, on average, 82 cents for every dollar their male counterparts make.” After a series of attempts to make sure women and men are paid equally, the House of Representatives passed the Paycheck Fairness bill in March this year. The Paycheck Fairness Act is intended to close the loopholes in the current Equal Pay Act of 1963. It now awaits Senate approval, after which the bill will become law.

Need for safe working conditions

Job insecurity

Fair pay, however, is not the only reason unions exist. Two months ago, Amazon workers protested against their employer on Prime Day, calling for a safe work environment. According to a report by The Verge, an attorney for Amazon confirmed that hundreds of employees at the Baltimore facility were terminated within a year for failing to meet productivity rates. The workers complained that Amazon gamifies productivity goals and makes them dynamic, which makes them unrealistic to achieve. They also demanded that the company ease quotas and make more temp employees permanent.

Toxic work culture

In April this year, Riot Games employees staged a walkout in protest of the company’s sexist culture and lack of diversity. Riot Games was put in the spotlight after many complaints by Riot workers of sexual harassment and discrimination faced at their workplace. The complaints and public outcry against Riot Games did not fall on deaf ears, as Riot Games settled the class-action lawsuit filed by its workers. The CEO of Riot Games said, “We are grateful for every Rioter who has come forward with their concerns and believe this resolution is fair for everyone involved.”

Unrealistic target demands and burnout

In January, the 2019 Game Developers Conference survey revealed that nearly 50% of game developers believed that game industry workers should unionize to fight against unrealistic target demands and long working hours. Following this, in February, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) published an open letter to game developers, persuading them to unionize and voice support for fair treatment at work. The letter pointed out that workers are asked to submit to unrealistic targets amid poor working conditions, job instability, and inadequate pay.

Forced arbitration keeps the status quo

The Google Walkout not only pushed Google to take a stand against sexual assault but also inspired other companies to take the right steps on sensitive issues. After the Google Walkout, Sundar Pichai, Chief Executive Officer of Google, addressed these demands and listed some major changes that would be incorporated at Google. Of all the demands, forced arbitration was one of the most criticized and protested policies among Google workers. Later, in January, Google workers launched an industry-wide awareness campaign to fight against it. The workers stated that the use of forced arbitration policies has grown significantly in the past seven years, with 65% of companies with 1,000 or more employees now having mandatory arbitration procedures. Relenting to the demands, Google finally ended its forced arbitration policy for all its employees in February. Under a forced arbitration policy, employees are required to waive their right to sue, to participate in a class-action lawsuit, or to appeal.
Following suit, Facebook also changed its forced arbitration policy, which had required employees to settle sexual harassment claims in private. The change allows Facebook employees to take any of their sexual harassment complaints to a court of law.

It is safe to say that unionization in tech is of utmost importance: only when a large group of workers protests together is a company forced to change course that would otherwise have gone unchallenged. Only structural changes such as worker unions, which challenge the current power dynamics, can shift the “if they don’t like it they can leave” mentality in the tech industry. With the tech industry playing fast and loose with laws and regulations, workers may be the only guardrail keeping it accountable. Unions, for all their limitations and shortcomings, help protect workers against company retaliation. Unions are likely not the end goal, but a means to achieve a fairer and more inclusive workplace, a more responsible business entity, and a saner marketplace.

Latest news in Tech

How Artificial Intelligence and machine learning can help address the ongoing climate change

A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports

The CAP Theorem in practice: The consistency vs. availability trade-off in distributed databases


Why companies that don’t invest in technology training can’t compete

Richard Gall
17 Sep 2019
8 min read
Here’s the bad news: the technology skills gap is seriously harming many businesses. Without people who can help drive innovation, companies are going to struggle to compete on the global stage. Fortunately, there’s also some good news: we can all do something about it. Although it might seem that the tech skills gap is an issue that’s too big for employers to properly tackle on their own, by investing time and energy in technology training, employers can ensure that they have the skills and knowledge within their team to continue to innovate and build solutions to the conveyor belt of problems.

Businesses ignore technology training at their peril. Sure, a great recruitment team is important, as are industry and academic connections. But without a serious focus on upskilling employees - software developers or otherwise - many companies won’t be able to compete. It’s not just about specific skills; it’s also about building a culture that is forward thinking and curious. A working environment that is committed to exploring new ways to solve problems. If companies aren’t serious about building this type of culture through technology training, they’re likely to run into a number of critical problems. These could have a long-term impact on growth and business success.

If you don’t invest in training and resources it's harder to retain employees

Something I find amusing is seeing business leaders complain about the dearth of talent in their industry or region, while also failing to engage or develop the talent they already have at their disposal. It sounds obvious, but it’s something that is often missed in conversations about the skills gap. If it’s so hard to find the talent you need, make sure you train to retain.

The consequences could be significant. If you have talented employees that are willing and eager to learn but you’re not prepared to support them, with either resources or time, you can bet that they’ll be looking for other jobs. For employees, a company that doesn’t invest in their skills and knowledge is demoralising in multiple ways - on the one hand it signals that the company doesn’t trust or value them, while on the other it leaves them doing work that doesn’t push or engage them.

If someone can’t do something, your first thought shouldn’t be to turn to a recruitment manager - you should instead be asking if someone in your team can do it. If they can’t, then ask yourself if they could with some training. This doesn’t mean that your HR problems can always be solved internally, but it does mean that if you take technology training seriously you can solve them much faster, and maybe even cheaper.

Indeed, there’s another important point here. There’s sometimes an assumption that the right person is out there who can do the job you need. Someone with a perfect background, all the skills, all the knowledge and expertise you need. This person almost certainly doesn’t exist. And, while you were dreaming of this perfect employee, your other developers all got jobs elsewhere. That leaves you in a tricky position. All the initiatives you wanted to achieve in the second half of the year now have to be pushed back.

Good technology training and team-based resources can attract talent

Although it’s important to invest in tech training and resources to develop and retain employees, a considered approach to technology training can also be useful in attracting people to your company. Think of companies like Google and Facebook.
One of the reasons they’re so attractive to many talented tech professionals (as well as the impressive salaries…) is that they offer so much scope to solve interesting problems and work on complex projects. Now, you probably can’t offer either the inflated salaries or the possibility of working on industry-defining technology, but by making technology training a critical part of your ‘employer brand’, you’ll find it much easier to make talented tech professionals pay attention to you. It’s also a way of showing prospective employees that you’re serious about their personal development, and that you understand just how important learning is in tech.

There are a number of ways that this could play out - from highlighting the resources you make available and give software engineering teams access to, to qualifications, and even just the opportunity to learn for a set period every week. Ultimately, by having a clear and concerted tech learning initiative, you can both motivate and develop your existing employees while also attracting new ones. This means that you’ll have a healthy developer culture, and be viewed as a developer- and tech-focused organization.

Read next: 5 barriers to learning and technology training for small software development teams

Organizations that invest in tech training and resources encourage employee flexibility and curiosity

To follow on from the point above, to successfully leverage technology, you need to build a culture where change is embraced. You need to value curiosity as a means of uncovering innovative solutions to problems. The way to encourage this is through technology training. Failing to provide even a basic level of support is not only unhelpful in a practical sense, it also sends a negative message to your employees: change doesn’t really matter, we always do things this way, so why does anyone need to spend time learning new things?

By showing your engineering employees that yes, their curiosity is important, you can ensure that you have people who take full ownership of problems. Instead of saying that’s how things have always been done, employees will be asking if there's another way. Ultimately that should lead to improved productivity and better outcomes. In turn, this means that your tech employees will become more flexible and adaptable without even realising it.

Without technology training and high-quality resources, businesses can’t evolve quickly

This brings us neatly to the next point. If your employees aren’t flexible and haven’t been encouraged to explore and learn new technologies and topics, the business will suffer. The speed at which it can adapt to changes in the market and deliver new products will drop, and that will harm the bottom line. All too often we think about business agility in an abstract way. But if you aren’t able to support your employees in embracing change and learning new skills, you simply can’t have business agility.

Of course, you might imagine a counterargument popping up here - couldn’t individual development harm business goals? True, most offices aren’t study spaces. But while business objectives and deadlines should always be the priority, if learning is constantly pushed to the bottom of the pile, you will find that the business, and your team, are going to hit a brick wall. The status quo is rarely sustainable. There will always be a point at which something won’t work, or something becomes out of date. If everyone is looking only at immediate problems and tasks, they’ll never see the bigger picture.
Technology training and learning resources can help to remove silos and improve collaboration

Silos are an inevitable reality in many modern businesses. This is particularly true in tech teams. Often they arise because of good intentions. Splitting people up and getting them to focus on different things isn’t exactly the worst idea in the world, right? Today, however, silos are significant barriers to innovation and change. Change happens when people break out of their silos and share knowledge. It happens when people develop new ways of working that are free from traditional constraints.

This isn’t something that’s easy to accomplish. It requires a lot of work on the culture and processes of a team. But one element that’s often overlooked when it comes to tackling silos is training and resources. If silos exist because people feel comfortable with specialization and focus, then by offering wide-reaching resources and training materials you will go a long way toward helping your employees get outside their comfort zone.

Indeed, in some respects we’re not talking about sustained training courses. Instead, it’s just as valuable - and more cost-effective - to provide people with resources that allow them to share their frame of reference. That might feel insignificant in the face of technological change beyond the business, and strategic visions inside it. However, it’s impossible to overstate the importance of a shared language or understanding. It’s a critical part of getting people to work together, and of getting people to think with more clarity about how they work and the tools they use.

Make sure your team has the resources they need to solve problems quickly and keep their skills up to date. Learn more about Packt for Teams here.


4 important business intelligence considerations for the rest of 2019

Richard Gall
16 Sep 2019
7 min read
Business intelligence occupies a strange position, often overshadowed by fields like data science and machine learning. But it remains a critical aspect of modern business - indeed, the less attention the world appears to pay to it, the more it is becoming embedded in modern businesses. Where analytics and dashboards once felt like a shiny and exciting interruption in our professional lives, today they are merely the norm. But with business intelligence almost baked into the day-to-day routines and activities of many individuals, teams, and organizations, what does this actually mean in practice? For as much as we’d like to think that we’re all data-driven now, the reality is that there’s much we can do to use data more effectively. Research confirms that data-driven initiatives often fail - so with that in mind, here’s what’s important when it comes to business intelligence in 2019.

Popular business intelligence eBooks and videos

Oracle Business Intelligence Enterprise Edition 12c - Second Edition
Microsoft Power BI Quick Start Guide
Implementing Business Intelligence with SQL Server 2019 [Video]
Hands-On Business Intelligence with Qlik Sense
Hands-On Dashboard Development with QlikView

Getting the balance between self-service business intelligence and centralization

Self-service business intelligence is one of the biggest trends to emerge in the last two years. In practice, this means that a diverse range of stakeholders (marketers and product managers, for example) have access to analytics tools. They’re no longer purely the preserve of data scientists and analysts. Self-service BI makes a lot of sense in the context of today’s data-rich and data-driven environment. The best way to empower team members to actually use data is to remove any bottlenecks (like a centralized data team) and allow them to go directly to the data and tools they need to make decisions. In essence, self-service business intelligence solutions are a step towards the democratization of data.

However, while the notion of democratizing data sounds like a noble cause, the reality is a little more complex. There are a number of different issues that make self-service BI a challenging thing to get right. One of the biggest pain points, for example, is the skill gap within the teams using these tools. Although self-service BI should make using data easy for team members, even the most user-friendly dashboards need a level of data literacy to be useful.

Read next: What are the limits of self-service BI?

Many analytics products are being developed with this problem in mind. But it’s still hard to get around - you don’t, after all, want to sacrifice the richness of data for simplicity and accessibility. Another problem is the messiness of data itself - and this ultimately points to one of the paradoxes of self-service BI. You need strong alignment - centralization, even - if you’re to ensure true democratization. The answer to all this isn’t to get tied up in decentralization or centralization. Instead, what’s important is striking a balance between the two. Decentralization needs centralization - there needs to be strong governance and clarity over what data exists, how it’s used, and how it’s accessed. Someone needs to be accountable for that for decentralized, self-service BI to actually work.
Read next: How Qlik Sense is driving self-service Business Intelligence

Self-service business intelligence: recommended viewing

Power BI Masterclass - Beginners to Advanced [Video]

Data storytelling that makes an impact

Data storytelling is a phrase that’s used too much without real consideration as to what it means or how it can be done. Indeed, all too often it’s used to refer to stylish graphs and visualizations. And yes, stylish graphs and data visualizations are part of data storytelling, but you can’t just expect some nice graphics to communicate in-depth data insights to your colleagues and senior management.

To do data storytelling well, you need to establish a clear sense of objectives and goals. By that I’m not referring only to your goals, but also to those of the people around you. It goes without saying that data and insight need context, but what that context should be, exactly, is often the hard part - objectives and aims are perhaps the most straightforward way of establishing that context and ensuring your insights are able to establish the scope of a problem and propose a way forward. Data storytelling can only really make an impact if you are able to strike a balance between centralization and self-service. Stakeholders that use self-service need confidence that everything they need is both available and accurate - and this can only really be ensured by a centralized team of data scientists, architects, and analysts.

Data storytelling: recommended viewing

Data Storytelling with Qlik Sense [Video]
Data Storytelling with Power BI [Video]

The impact of cloud

It’s hard to overstate the extent to which cloud is changing the data landscape. Not only is it easier than ever to store and process data, it’s also easy to do different things with it. This means that it’s now possible to do machine learning or artificial intelligence projects with relative ease (the word relative being important, of course). For business intelligence, this means there needs to be a clear strategy that joins together every piece of the puzzle, from data collection to analysis. There needs to be buy-in and input from stakeholders before a solution is purchased - or built - and then the solution needs to be developed with every individual use case properly understood and supported.

Indeed, this requires a combination of business acumen, soft skills, and technical expertise. A large amount of this will rest on the shoulders of an organization’s technical leadership team, but it’s also worth pointing out that those in other departments still have a part to play. If stakeholders are unable to present a clear vision of what their needs and goals are, it’s highly likely that the advantages of cloud will pass them by when it comes to business intelligence.

Cloud and business intelligence: recommended viewing

Going beyond Dashboards with IBM Cognos Analytics [Video]

Business intelligence ethics

Ethics has become a huge issue for organizations over the last couple of years. With the Cambridge Analytica scandal placing the spotlight on how companies use customer data, and GDPR forcing organizations to take a new approach to (European) user data, it’s undoubtedly the case that ethical considerations have added a new dimension to business intelligence. But what does this actually mean in practice? Ethics manifests itself in numerous ways in business intelligence. Perhaps the most obvious is data collection - do you have the right to use someone’s data in a certain way?
Sometimes the law will make it clear. But at other times it will require individuals to exercise judgment and be sensitive to the issues that could arise. There are also other ways in which individuals and organizations need to think about ethics. Being data-driven is great, especially if you can approach insight in a way that is actionable and proactive. But at the same time it’s vital that business intelligence isn’t just seen as a replacement for human intelligence. Indeed, this is true not just in an ethical sense, but also in terms of sound strategic thinking. Business intelligence without human insight and judgment is really just the opposite of intelligence.

Conclusion: business intelligence needs organizational alignment and buy-in

There are many issues that have been slowly emerging in the business intelligence world over the last half a decade. This might make things feel confusing, but in actual fact it underlines the very nature of the challenges organizations, leadership teams, and engineers face when it comes to business intelligence. Essentially, doing business intelligence well requires you - and those around you - to tie all these different elements together. It's certainly not straightforward, but with focus and clarity of thought, it's possible to build a really effective BI program that can fulfil organizational needs well into the future.


How artificial intelligence and machine learning can help us tackle the climate change emergency

Vincy Davis
16 Sep 2019
14 min read
“I don’t want you to be hopeful. I want you to panic. I want you to feel the fear I feel every day. And then I want you to act on changing the climate” - Greta Thunberg

Greta Thunberg is a 16-year-old Swedish schoolgirl who is famously known as a climate change warrior. She has started an international youth movement against climate change and has been nominated for the Nobel Peace Prize 2019 for her climate activism. According to a recent report by the Intergovernmental Panel on Climate Change (IPCC), climate change is seen as the top global threat by many countries. The effects of climate change are going to make 1 million species go extinct, warns a UN report. The Earth’s rising temperatures are fueling longer and hotter heat waves, more frequent droughts, heavier rainfall, and more powerful hurricanes.

Antarctica is breaking. Indonesia, the world's 4th most populous country, just shifted its capital from Jakarta because it's sinking. Singapore is worried investments are moving away. Last year, Europe experienced an 'extreme year' for unusual weather events: after a couple of months of extremely cold weather, heat and drought plagued spring and summer, with temperatures well above average in most of the northern and western areas. The UK Parliament has declared a ‘climate change emergency’ after a series of intense protests earlier this month. More than 1,200 people were killed across South Asia due to heavy monsoon rains and intense flooding (in some places the worst in nearly 30 years). The Camp Fire, in November 2018, was the deadliest and most destructive in California’s history, causing the death of at least 85 people and destroying about 14,000 homes. Australia’s most populous state, New South Wales, suffered from an intense drought in 2018. According to a report released by the UN last year, there are “Only 11 Years Left to Prevent Irreversible Damage from Climate Change”.

Addressing climate change: how Artificial Intelligence (AI) can help

As seen above, the environmental impacts of climate change are clear, and the list is vast and depressing. It is important to address climate change issues, as they play a key role in the workings of natural ecosystems: changes in the nature of global rainfall, diminishing ice sheets, and other factors on which the human economy and civilization depend. With the help of Artificial Intelligence (AI), we can increase our probability of becoming efficient, or at least slow down the damage caused by climate change.

At the recently held ICLR 2019 (International Conference on Learning Representations), Emily Shuckburgh, a climate scientist and deputy head of the Polar Oceans team at the British Antarctic Survey, highlighted the need for actionable information on climate risk. She elaborated on how we can monitor, treat, and find solutions to climate change using machine learning, and on how AI can synthesize and interpolate different datasets within a framework that allows easy interrogation by users and near-real-time ingestion of new data.

According to the MIT Technology Review’s take on climate change, there are three approaches to addressing it: mitigation, navigation, and suffering. Technologies generally concentrate on mitigation, but it’s high time we give more focus to the other two approaches. In a catastrophically altered world, it will be necessary to concentrate on adaptation and suffering. The review states that mitigation steps have so far done little to curb fossil fuel use.
Thus, it is important for us to learn to adapt to these changes. Building predictive models that rely on masses of data will also give us a better idea of how bad the effects of a disaster can be and help us visualize the suffering. Applying Artificial Intelligence to these approaches will help not only to reduce the causes of climate change but also to adapt to it. Using AI, we can more accurately predict the state of the climate, which will help create better climate models for the future. These predictions can be used to identify our biggest vulnerabilities and risk zones, and will help us respond better to the impacts of climate change such as hurricanes, rising sea levels, and higher temperatures. Let’s see how Artificial Intelligence is being used in all three approaches.

Mitigation: Reducing the severity of climate change

Looking at the extreme climatic changes, many researchers have started exploring how AI can step in to reduce the effects of climate change. These include ways to reduce greenhouse gas emissions or enhance the removal of these gases from the atmosphere. With a view to consuming less energy, there has been an active increase in technologies for using energy smartly.

One such startup is Verv. It is an intelligent IoT hub that uses patented AI technology to give users the ability to take control of their energy usage. This home energy system provides information about your home appliances and other electricity data directly from the mains, which helps to reduce your electricity bills and lower your carbon footprint. Igloo Energy is another system that helps customers use energy efficiently and save money. It uses smart meters to analyse behavioural, property occupancy, and surrounding environmental data inputs to lower the energy consumption of users. Nnergix is a weather analytics startup focused on the renewable energy industry. It collects weather and energy data from multiple sources across the industry to feed machine-learning-based algorithms that run several analytic solutions, with the main goal of helping any system become more efficient during operations and reduce costs.

Recently, Google announced that by using Artificial Intelligence it has boosted the value of its wind energy by roughly 20 percent. A neural network is trained on widely available weather forecasts and historical turbine data. The DeepMind system is configured to predict the wind power output 36 hours ahead of actual generation. Based on these predictions, the model then recommends hourly delivery commitments to the power grid a full day in advance.

Large industrial systems account for 54% of global energy consumption, and this high level of energy consumption is a primary contributor to greenhouse gas emissions. In 2016, Google’s DeepMind was able to reduce the energy required to cool Google’s data centers by 30%. Initially, the team built a general-purpose learning algorithm, which was later developed into a full-fledged AI system with features including continuous monitoring and human override. Just last year, Google put an AI system in charge of keeping its data centers cool. Every five minutes, the AI pulls a snapshot of the data center cooling system from thousands of sensors. This data is fed into deep neural networks, which predict how different choices will affect future energy consumption.
The neural networks are trained to maintain the future PUE (Power Usage Effectiveness) and to predict the future temperature and pressure of the data centre over the next hour, ensuring that any tweaks do not take the data center beyond its operating limits. Google has found that the machine learning systems were able to consistently achieve a 30 percent reduction in the amount of energy used for cooling, the equivalent of a 15 percent reduction in overall PUE. As seen, there are many companies trying to reduce the severity of climate change.

Navigation: Adapting to current conditions

Though there have been brave initiatives to reduce the causes of climate change, they have failed to show any major results. This could be due to the increasing demand for energy resources, which is expected to grow immensely worldwide. It is now necessary to concentrate more on adapting to climate change, as we are in a state where it is almost impossible to undo its effects. Thus, it is better to learn to navigate through this climate change.

A startup in Berlin called GreenAdapt has created software that uses AI to tackle local impacts induced both by gradual change and by extreme weather events such as storms. It identifies the effects of climatic changes and proposes adequate adaptation measures. Another startup, called Zuli, has a smart plug that reduces energy use. It contains sensors that can estimate energy usage, wirelessly communicate with your smartphone, and accurately sense your location. A firm called Gridcure provides real-time analytics and insights for energy and utilities. It helps power companies recover losses and boost revenue by operating more efficiently. It also helps them provide better delivery to consumers, big reductions in energy waste, and increased adoption of clean technologies.

With mitigation and navigation efforts underway, let’s see how firms are working toward more futuristic goals.

Visualization: Predicting the future

It is equally important to build accurate climate models, which will help humans cope with the after-effects of climate change. Climate models are mathematical representations of the Earth's climate system that take into account humidity, temperature, air pressure, wind speed and direction, as well as cloud cover, and predict future weather conditions. This can help in tackling disasters. It is also imperative to rapidly expand our information about global climate change, which will help create more accurate models.

A modeling startup called Jupiter is trying to improve the accuracy of predictions about climate change. It makes physics-based and Artificial Intelligence-powered decisions using data from millions of ground-based and orbital sensors. Another firm, BioCarbon Engineering, plans to use drones that will fly over potentially suitable areas and compile 3D maps. The drones will then scatter small containers over the best areas, containing fertilized seeds as well as nutrients and moisture gel. In this way, 36,000 trees can be planted every day, in a way that is cheaper than other methods. After planting, drones will continue to monitor the germinating seeds and deliver further nutrients when necessary to ensure their healthy growth. This could help to absorb carbon dioxide from the atmosphere. Another initiative comes from an ETH doctoral student at the Functional Materials Laboratory, who has developed a cooling curtain made of a porous triple-layer membrane as an alternative to electrically powered air conditioning.
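To make the forecasting idea above more concrete, here is a minimal, illustrative sketch (not a reproduction of any of the systems named above) of the general approach: train a regression model on weather-forecast features and historical turbine output, then use it to predict power generation for a slot a day or more ahead. The data, features, and units below are entirely synthetic assumptions.

```python
# Minimal, illustrative sketch of forecast-driven power prediction.
# The data and feature set are synthetic; this is not any real system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic history: [forecast wind speed (m/s), forecast temperature (C), hour of day]
X = np.column_stack([
    rng.uniform(0, 25, 2_000),
    rng.uniform(-5, 35, 2_000),
    rng.integers(0, 24, 2_000),
])
# Toy relationship: output grows with wind speed up to a cut-out threshold.
y = np.clip(X[:, 0], 0, 15) ** 3 * 2.0 + rng.normal(0, 50, 2_000)

model = GradientBoostingRegressor().fit(X, y)

tomorrow = np.array([[12.0, 18.0, 14]])   # forecast for a slot 36 hours ahead
print(f"Predicted output: {model.predict(tomorrow)[0]:.0f} kW")
```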
In 2017, Microsoft launched the 'AI for Earth' initiative, which primarily focuses on climate, conservation, biodiversity, and related areas. AI for Earth awards grants to projects that use artificial intelligence to address critical areas that are vital for building a sustainable future. Microsoft is also using its cloud computing service, Azure, to give computing resources to scientists working on environmental sustainability programs. Intel has deployed Artificial Intelligence-equipped drones in Costa Rica to construct models of the forest terrain and calculate the amount of carbon being stored based on tree height, health, biomass, and other factors. The collected data about carbon capture can enhance management and conservation efforts, support scientific research projects on forest health and sustainability, and enable many other kinds of applications. IBM's 'Green Horizon Project' analyzes environmental data and predicts pollution, as well as testing scenarios that involve pollution-reducing tactics. IBM's 'Deep Thunder' group works with research centers in Brazil and India to accurately predict flooding and potential mudslides due to severe storms. As seen above, many organizations and companies, ranging from startups to big tech, have understood the adverse effects of climate change and are taking steps to address them. However, certain challenges and limitations act as barriers to these systems' success.
What do big tech firms and startups lack?
Though many big tech and influential companies boast of immense contributions to fighting climate change, there have been instances where these firms enter lucrative deals with oil companies. Just last year, Amazon, Google, and Microsoft struck deals with oil companies to provide cloud, automation, and AI services to them. These deals were reported openly by Gizmodo and yet didn't attract much criticism. This trend of powerful companies venturing into oil businesses even while knowing the effects of dangerous climate change is depressing. Last year, Amazon quietly launched the 'Amazon Sustainability Data Initiative'. It helps researchers store weather observations and forecasts, satellite images, and metrics about oceans and air quality so that they can be used for modeling and analysis, and it encourages organizations to use the data to make decisions that support sustainable development. This year, Amazon expanded its vision by announcing 'Shipment Zero', which aims to make 50% of Amazon shipments net zero carbon by 2030, with a wider aim of reaching 100% in the future. However, Shipment Zero only commits to net carbon reductions. Recently, Amazon ordered 20,000 diesel vans whose emissions will need to be offset with carbon credits. Offsets can entail forest management policies that displace indigenous communities, and they do nothing to reduce diesel pollution, which disproportionately harms communities of color. Some in the industry expressed disappointment that Amazon's order is for 20,000 diesel vans, and not a single electric vehicle. In April, over 4,520 Amazon employees organized against Amazon's continued profiting from climate devastation. They signed an open letter addressed to Jeff Bezos and the Amazon board of directors asking for a company-wide action plan to address climate change and an end to the company's reliance on dirty energy resources. Recently, Microsoft doubled its internal carbon fee to $15 per metric ton on all carbon emissions.
The funds from this higher fee will maintain Microsoft's carbon neutrality and help meet its sustainability goals. On the other hand, Microsoft is also two years into a seven-year deal, rumored to be worth over a billion dollars, to help Chevron, one of the world's largest oil companies, better extract and distribute oil. Microsoft Azure has also partnered with Equinor, a multinational energy company, to provide data services in a deal worth hundreds of millions of dollars. Instead of profiting from these deals, Microsoft could have taken a stand by ending partnerships with fossil fuel companies that accelerate oil and gas exploration and extraction. As for smaller firms, it is often difficult for a climate-focused conservation startup to survive due to a dearth of funding. Many such organizations are small and relatively weak, struggling to grow in a sector marked by widespread apathy and a lack of steady financing. Being little known, these startups also find it difficult to market their ideas and convince people to try their systems; they always need a commercial boost to find more takers.
Pitfalls of using Artificial Intelligence for climate preservation
Though AI has enormous potential to help us create a sustainable future, it is only part of a bigger set of tools and pathways needed to reach that goal. It also comes with its own limitations and side effects. An inability to control malicious AI can cause unexpected outcomes: hackers can use AI to develop smart malware that interferes with early warnings, enables bad actors to control energy, transportation, or other critical systems, and could also give them access to sensitive data. This could result in unexpected outcomes at crucial output points for AI systems. AI bias is another dangerous phenomenon that can produce irrational results in a working system. Bias in an AI system mainly occurs in the data or in the system's algorithmic model, which may produce incorrect results in its functions and security. More importantly, we should not rely on Artificial Intelligence alone to fight the effects of climate change. Our focus should be on addressing the causes of climate change and trying to minimize them, starting at the individual level. Governments in every country must also contribute by initiating climate policies that will help their citizens in the long run. One vital task would be to implement quick responses in the case of climate emergencies. In the recent case of the Odisha storms, the pinpoint accuracy of India's weather agency helped move millions of people to safe spaces, resulting in minimal casualties.
Next up in Climate
Amazon employees plan to walkout for climate change during the Sept 20th Global Climate Strike
Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports

Savia Lobo
13 Sep 2019
6 min read
Updated: On September 27, researchers from Security Research Labs (SRLabs) released five key research findings on the extent of Simjacker and on how one can determine whether their SIM is vulnerable to such an exploit. Yesterday, Adaptive Mobile Security made a breakthrough announcement revealing that a new vulnerability, which the firm calls Simjacker, has been used by attackers to spy on mobile phones. Researchers at Adaptive Mobile Security believe the vulnerability has been exploited for at least the last two years "by a highly sophisticated threat actor in multiple countries, primarily for the purposes of surveillance." They further added that the Simjacker vulnerability "is a huge jump in complexity and sophistication compared to attacks previously seen over mobile core networks. It represents a considerable escalation in the skillset and abilities of attackers seeking to exploit mobile networks."
Also Read: 25 million Android devices infected with 'Agent Smith', a new mobile malware
How the Simjacker attack works and why it is a grave threat
In the Simjacker attack, an SMS containing specific spyware-like code is sent to a victim's mobile phone. When received, this SMS instructs the UICC (SIM card) within the phone to 'take over' the mobile phone in order to retrieve data and perform sensitive commands. "During the attack, the user is completely unaware that they received the SMS with the Simjacker Attack message, that information was retrieved, and that it was sent outwards in the Data Message SMS - there is no indication in any SMS inbox or outbox," the researchers mention in their official blog post. (Source: Adaptive Mobile Security) The Simjacker attack relies on the S@T (SIMalliance Toolbox, pronounced 'sat') browser software as an execution environment. The S@T browser, an application specified by the SIMalliance, can be installed on different UICCs (SIM cards), including eSIMs. The S@T browser software is quite old and little used; its initial aim was to enable services such as retrieving your account balance through the SIM card. The software specifications have not been updated since 2009 and have been superseded by many other technologies since then. Researchers say they have observed the "S@T protocol being used by mobile operators in at least 30 countries whose cumulative population adds up to over a billion people, so a sizable amount of people are potentially affected. It is also highly likely that additional countries have mobile operators that continue to use the technology on specific SIM cards."
Simjacker is a next-gen SMS attack
The Simjacker attack is unique. Previous SMS malware involved sending links to malware; the Simjacker Attack Message, by contrast, carries a complete malware payload, specifically spyware with instructions for the SIM card to execute the attack. The Simjacker attack can do more than simply track the user's location and personal data. By modifying the attack message, the attacker could instruct the UICC to execute a range of other attacks, because the same method gives an attacker complete access to the STK command set, including commands such as launch browser, send data, set up a call, and much more.
Also Read: Using deep learning methods to detect malware in Android Applications
The researchers used these commands in their own tests and were successfully able to make targeted handsets open web browsers, ring other phones, send text messages, and so on.
They further highlighted other purposes this attack could be used for:
- Misinformation (e.g. by sending SMS/MMS messages with attacker-controlled content)
- Fraud (e.g. by dialling premium rate numbers)
- Espionage (in addition to the location-retrieval attack, an attacked device could function as a listening device by ringing a number)
- Malware spreading (by forcing a browser to open a web page hosting malware)
- Denial of service (e.g. by disabling the SIM card)
- Information retrieval (retrieving other information such as language, radio type, battery level, etc.)
The researchers highlight another benefit of the Simjacker attack for the attackers: many of its attacks seem to work independently of handset type, as the vulnerability depends on the software on the UICC and not on the device. Adaptive Mobile says behind the Simjacker attack is a "specific private company that works with governments to monitor individuals." This company also has extensive access to the SS7 and Diameter core network. Researchers said that in one country, roughly 100-150 specific individual phone numbers were being targeted per day via Simjacker attacks. A few phone numbers had also been tracked a hundred times over a 7-day period, suggesting they belonged to high-value targets. (Source: Adaptive Mobile Security) The researchers added that they have been "working with our own mobile operator customers to block these attacks, and we are grateful for their assistance in helping detect this activity." They said they have also communicated the existence of this vulnerability to the GSM Association, the trade body representing the mobile operator community. The vulnerability has been managed through the GSMA CVD program, allowing information to be shared throughout the mobile community. "Information was also shared to the SIM alliance, a trade body representing the main SIM Card/UICC manufacturers and they have made new security recommendations for the S@T Browser technology," the researchers said. "The Simjacker exploit represents a huge, nearly Stuxnet-like, leap in complexity from previous SMS or SS7/Diameter attacks, and show us that the range and possibility of attacks on core networks are more complex than we could have imagined in the past," the blog mentions. The Adaptive Mobile Security team will present more details about the Simjacker attack at the Virus Bulletin Conference in London on 3rd October 2019.
https://twitter.com/drogersuk/status/1172194836985913344
https://twitter.com/campuscodi/status/1172141255322689537
To know more about the Simjacker attack in detail, read Adaptive Mobile's official blog post.
SRLabs researchers release protection tools against Simjacker and other SIM-based attacks
On September 27, researchers from Security Research Labs (SRLabs) released five key findings on the extent of Simjacker and on how one can determine whether their SIM is vulnerable to such an exploit. The researchers have highlighted five key findings in their research report and also provided an FAQ so that users can implement the necessary measures.
Following are the five key research findings the SRLabs researchers mention:
- Around 6% of 800 SIM cards tested in recent years were vulnerable to Simjacker.
- A second, previously unreported, vulnerability affects an additional 3.5% of SIM cards.
- The tool SIMtester provides a simple way to check any SIM card for both vulnerabilities (and for a range of other issues reported in 2013).
- The SnoopSnitch Android app has warned users about binary SMS attacks, including Simjacker, since 2014. (Attack alerting requires a rooted Android phone with a Qualcomm chipset.)
- A few Simjacker attacks have been reported since 2016 by the thousands of SnoopSnitch users who actively contribute data.
To learn about these key findings from the SRLabs researchers in detail, read the official report.
Other interesting news in Security
Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA recommended FireEye report Highlights
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries

The CAP Theorem in practice: The consistency vs. availability trade-off in distributed databases

Richard Gall
12 Sep 2019
7 min read
When you choose a database you are making a design decision. One of the best frameworks for understanding what this means in practice is the CAP Theorem.
What is the CAP Theorem?
The CAP Theorem, developed by computer scientist Eric Brewer in the late nineties, states that databases can only ever fulfil two out of three elements:
- Consistency - reads are always up to date, which means any client making a request to the database will get the same view of data.
- Availability - database requests always receive a response (when valid).
- Partition tolerance - a network fault doesn't prevent messaging between nodes.
In the context of distributed (NoSQL) databases, this means there is always going to be a trade-off between consistency and availability. This is because distributed systems are necessarily partition tolerant (i.e. it simply wouldn't be a distributed database if it wasn't partition tolerant).
Read next: Different types of NoSQL databases and when to use them
How do you use the CAP Theorem when making database decisions?
Although the CAP Theorem can feel quite abstract, it has practical, real-world consequences. From both a technical and business perspective the trade-offs will lead you to some very important questions. There are no right answers. Ultimately it will be all about the context in which your database is operating, the needs of the business, and the expectations and needs of users. You will have to consider things like:
- Is it important to avoid throwing up errors in the client? Or are we willing to sacrifice the visible user experience to ensure consistency?
- Is consistency actually an important part of the user's experience? Or can we do what we want with a relational database and avoid the need for partition tolerance altogether?
As you can see, these are ultimately user experience questions. To properly understand them, you need to be sensitive to the overall goals of the project and, as said above, the context in which your database solution is operating. (For example, is it powering an internal analytics dashboard, or is it supporting a widely used external-facing website or application?) And, as the final bullet point highlights, it's always worth considering whether the consistency vs. availability trade-off should matter at all. Avoid the temptation to think a complex database solution will always be better when a simple, more traditional solution will do the job. Of course, it's important to note that a system that isn't partition tolerant has a single point of failure, which introduces the potential for unreliability.
Prioritizing consistency in a distributed database
It's possible to get into a lot of technical detail when talking about consistency and availability, but at a really fundamental level the principle is straightforward: you need consistency (or what is called a CP database) if the data in the database must always be up to date and aligned, even in the instance of a network failure (e.g. when the partitioned nodes are unable to communicate with one another for whatever reason). Particular use cases where you would prioritize consistency are those where multiple clients need the same view of the data. For example, when you're dealing with financial or personal information, you want a database that gives you confidence that the data you are looking at is up to date, even when the network is unreliable or fails.
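To make the consistency side concrete, the sketch below shows how a CP-leaning setup might look with MongoDB (the first example listed below): reads and writes are acknowledged by a majority of replica set members, trading a little availability and latency for an up-to-date view. This is a minimal illustration, assuming the official mongodb Node.js driver; the connection string, database, and field names are invented.

```typescript
import { MongoClient } from "mongodb";

// Minimal sketch: favour consistency by requiring majority acknowledgement.
const client = new MongoClient("mongodb://localhost:27017");

async function consistentBalanceCheck(): Promise<void> {
  await client.connect();
  const accounts = client.db("bank").collection("accounts", {
    readConcern: { level: "majority" },  // only read majority-committed data
    writeConcern: { w: "majority" },     // wait for a majority of nodes on writes
  });

  await accounts.updateOne({ owner: "alice" }, { $inc: { balance: -100 } });
  const alice = await accounts.findOne({ owner: "alice" });
  console.log(alice?.balance); // reflects the write above, even if some nodes lag

  await client.close();
}

consistentBalanceCheck().catch(console.error);
```

An AP-leaning system would relax both settings, or accept stale reads, in exchange for answering every request.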
Examples of CP databases
- MongoDB (Learning MongoDB 4 [Video], MongoDB 4 Quick Start Guide, MongoDB, Express, Angular, and Node.js Fundamentals)
- Redis (Build Complex Express Sites with Redis and Socket.io [Video], Learning Redis)
- HBase (Learn by Example: HBase - The Hadoop Database [Video], HBase Design Patterns)
Prioritizing availability in a distributed database
Availability is essential when data accumulation is a priority. Think here of things like behavioral data or user preferences. In scenarios like these, you will want to capture as much information as possible about what a user or customer is doing, but it isn't critical that the database is constantly up to date. It simply needs to be accessible and available even when network connections aren't working. The growing demand for offline application use is another reason why you might use a NoSQL database that prioritizes availability over consistency.
Examples of AP databases
- Cassandra (Learn Apache Cassandra in Just 2 Hours [Video], Mastering Apache Cassandra 3.x - Third Edition)
- DynamoDB (Managed NoSQL Database In The Cloud - Amazon AWS DynamoDB [Video], Hands-On Amazon DynamoDB for Developers [Video])
Limitations and criticisms of CAP Theorem
It's worth noting that the CAP Theorem can pose problems; in truth, things are a little more complicated. Even Eric Brewer is circumspect about the theorem, especially in terms of what we now expect from distributed databases. Back in 2012, twelve years after he first put his theorem into the world, he wrote:
"Although designers still need to choose between consistency and availability when partitions are present, there is an incredible range of flexibility for handling partitions and recovering from them. The modern CAP goal should be to maximize combinations of consistency and availability that make sense for the specific application. Such an approach incorporates plans for operation during a partition and for recovery afterward, thus helping designers think about CAP beyond its historically perceived limitations."
So, this means we must think about the trade-off between consistency and availability as a balancing act, rather than a binary design decision. Elsewhere, there have been more robust criticisms of the CAP Theorem. Software engineer Martin Kleppmann, for example, pleaded "Please stop calling databases CP or AP" in 2015. In a blog post he argues that the CAP Theorem only works if you adhere to specific definitions of consistency, availability, and partition tolerance. "If your use of words matches the precise definitions of the proof, then the CAP theorem applies to you," he writes. "But if you're using some other notion of consistency or availability, you can't expect the CAP theorem to still apply." The consequences of this are much like those described in Brewer's piece from 2012: you need to take a nuanced approach to database trade-offs in which you think them through on your own terms and against your own needs.
The PACELC Theorem
One development of this line of argument is an extension to the CAP Theorem: the PACELC Theorem. This moves beyond thinking only about consistency and availability and instead places an emphasis on the trade-off between consistency and latency. The PACELC Theorem builds on the CAP Theorem (the 'PAC') and adds an else (the 'E').
What this means is that while you need to choose between availability and consistency if communication between partitions has failed in a distributed system, there is still going to be a trade-off between consistency and latency (the 'LC') even when things are running properly and there are no network issues.
Conclusion: Learn to align context with technical specs
Although the CAP Theorem might seem somewhat outdated, it is valuable in providing a way to think about database architecture design. It not only forces engineers and architects to ask questions about what they want from the technologies they use, but also forces them to think carefully about the requirements of a given project. What are the business goals? What are user expectations? The PACELC Theorem builds on CAP in an effective way. However, the most important thing about these frameworks is how they help you to think about your problems. Of course, the CAP Theorem has limitations: because it abstracts a problem, it is necessarily going to lack nuance, and there are things it simplifies. It's important, as Kleppmann reminds us, to be mindful of these nuances. But at the same time, we shouldn't let an obsession with nuance and detail cause us to miss the bigger picture.

Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA recommended FireEye report Highlights

Savia Lobo
12 Sep 2019
8 min read
Last week, the Cybersecurity and Infrastructure Security Agency (CISA) shared strategies with users and organizations to prevent, mitigate, and recover from ransomware. They said, "The Cybersecurity and Infrastructure Security Agency (CISA) has observed an increase in ransomware attacks across the Nation. Helping organizations protect themselves from ransomware is a chief priority for CISA." They have also advised that those attacked by ransomware should report immediately to CISA, a local FBI Field Office, or a Secret Service Field Office. Of the three resources shared, the first two cover general awareness about what ransomware is and why it is a major threat, mitigations, and much more. The third resource is a FireEye report on ransomware protection and containment strategies.
Also Read: Vulnerabilities in the Picture Transfer Protocol (PTP) allows researchers to inject ransomware in Canon's DSLR camera
CISA INSIGHTS and best practices to prevent ransomware
As part of its first "CISA INSIGHTS" product, CISA has laid out three simple steps organizations can take to manage their cybersecurity risk. CISA advises users to take necessary precautionary steps such as backing up the entire system offline, keeping systems updated and patched, updating security solutions, and much more. Users who have been affected by ransomware should contact CISA or the FBI immediately, work with an experienced advisor to help recover from the attack, isolate the infected systems, phase their return to operations, and so on. Further, CISA also tells users to practice good cyber hygiene: back up, update, whitelist apps, limit privileges, and use multi-factor authentication. Users should also develop containment strategies that will make it difficult for bad actors to extract information, review disaster recovery procedures, validate goals with executives, and much more. The CISA team has also suggested certain best practices organizations should employ to stay safe from a ransomware attack. These include restricting permissions to install and run software applications, and applying the principle of "least privilege" to all systems and services, thus limiting the ability of ransomware to spread. Organizations should also use application whitelisting to allow only approved programs to run on a network. All firewalls should be configured to block access to known malicious IP addresses. Organizations should also enable strong spam filters to prevent phishing emails from reaching end users and authenticate inbound emails to prevent email spoofing. They should also scan all incoming and outgoing emails to detect threats and filter executable files before they reach end users. Read the full CISA INSIGHTS document to learn more about the various ransomware outbreak strategies in detail.
Also Read: 'City Power Johannesburg' hit by a ransomware attack that encrypted all its databases, applications and network
FireEye report on Ransomware Protection and Containment strategies
As a third resource, CISA shared a FireEye report titled "Ransomware Protection and Containment Strategies: Practical Guidance for Endpoint Protection, Hardening, and Containment". In this whitepaper, FireEye discusses different steps organizations can proactively take to harden their environment and prevent the downstream impact of a ransomware event.
These recommendations can also help organizations prioritize the most important steps required to contain and minimize the impact of a ransomware event after it occurs. The FireEye report points out that ransomware is generally deployed across an environment in two ways. The first is manual propagation, where a threat actor who has penetrated an environment and holds administrator-level privileges broadly across it manually runs encryptors on targeted systems through Windows batch files, Microsoft Group Policy Objects, and existing software deployment tools used by the victim's organization. The second is automated propagation, where credentials or Windows tokens are extracted directly from disk or memory to build trust relationships between systems through Windows Management Instrumentation, SMB, or PsExec, binding systems together and executing payloads; attackers also automate propagation by brute forcing credentials and exploiting unpatched vulnerabilities such as BlueKeep and EternalBlue. "While the scope of recommendations contained within this document is not all-encompassing, they represent the most practical controls for endpoint containment and protection from a ransomware outbreak," FireEye researchers wrote. To combat these two deployment techniques, the FireEye researchers have suggested two enforcement measures that can limit the capability of a ransomware or malware variant to impact a large scope of systems within an environment. The FireEye report covers several technical recommendations to help organizations mitigate the risk of, and contain, ransomware events, some of which include:
RDP Hardening
Remote Desktop Protocol (RDP) is a common method used by malicious actors to remotely connect to systems and move laterally from the perimeter onto a larger scope of systems for deploying malware. Organizations should proactively scan their public IP address ranges to identify systems with RDP (TCP/3389) and other protocols (SMB, TCP/445) open to the Internet. RDP and SMB should not be directly exposed to ingress and egress access to/from the Internet. Other measures that organizations can take include:
Enforcing Multi-Factor Authentication
Organizations can either integrate a third-party multi-factor authentication technology or leverage a Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS.
Leveraging Network Level Authentication (NLA)
Network Level Authentication (NLA) provides an extra layer of pre-authentication before a connection is established. It is also useful for protecting against brute force attacks, which mostly target open, internet-facing RDP servers.
Reducing the exposure of privileged and service accounts
For ransomware deployment throughout an environment, both privileged and service account credentials are commonly utilized for lateral movement and mass propagation. Without a thorough investigation, it may be difficult to determine the specific credentials being utilized by a ransomware variant for connectivity within an environment.
Privileged account and service account logon restrictions
Accounts with privileged access throughout an environment should not be used on standard workstations and laptops, but rather on designated systems (e.g., Privileged Access Workstations (PAWs)) that reside in restricted and protected VLANs and Tiers. Explicit privileged accounts should be defined for each Tier and only utilized within the designated Tier.
The recommendations for restricting the scope of access for privileged accounts are based upon Microsoft's guidance for securing privileged access. As a quick containment measure, consider blocking any accounts with privileged access from being able to log in (remotely or locally) to standard workstations, laptops, and common access servers (e.g., virtualized desktop infrastructure). If a service account is only required on a single endpoint to run a specific service, it can be further restricted so that its usage is only permitted on a predefined list of endpoints.
Protected Users Security Group
With the "Protected Users" security group for privileged accounts, an organization can minimize various risk factors and common exploitation methods that expose privileged accounts on endpoints. Starting with Microsoft Windows 8.1 and Microsoft Windows Server 2012 R2 (and above), the "Protected Users" security group was introduced to manage credential exposure within an environment. Members of this group automatically have specific protections applied to their accounts, including:
- The Kerberos ticket granting ticket (TGT) expires after 4 hours, rather than the normal 10-hour default setting.
- No NTLM hash for the account is stored in LSASS since only Kerberos authentication is used (NTLM authentication is disabled for the account).
- Cached credentials are blocked; a Domain Controller must be available to authenticate the account.
- WDigest authentication is disabled for the account, regardless of an endpoint's applied policy settings.
- DES and RC4 can't be used for Kerberos pre-authentication (Server 2012 R2 or higher); rather, Kerberos with AES encryption will be enforced.
- Accounts cannot be used for either constrained or unconstrained delegation (equivalent to enforcing the "Account is sensitive and cannot be delegated" setting in Active Directory Users and Computers).
Cleartext password protections
Organizations should also try to minimize the exposure of credentials and tokens in memory on endpoints. On older Windows operating systems, cleartext passwords are stored in memory (LSASS), primarily to support WDigest authentication. WDigest should be explicitly disabled on all Windows endpoints where it is not disabled by default. WDigest authentication is disabled by default in Windows 8.1+ and Windows Server 2012 R2+. Starting with Windows 7 and Windows Server 2008 R2, after installing Microsoft Security Advisory KB2871997, WDigest authentication can be configured either by modifying the registry or by using the "Microsoft Security Guide" Group Policy template from the Microsoft Security Compliance Toolkit. To implement these and other ransomware protection and containment strategies, read the FireEye report.
Other interesting news in Cybersecurity
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries
Exim patches a major security bug found in all versions that left millions of Exim servers vulnerable to security attacks
CircleCI reports of a security breach and malicious database in a third-party vendor account

Different types of NoSQL databases and when to use them

Richard Gall
10 Sep 2019
8 min read
Why NoSQL databases?
The popularity of NoSQL databases over the last decade or so has been driven by an explosion of data. Before what's commonly described as 'the big data revolution', relational databases were the norm - these are databases that contain structured data. Structured data can only be structured if it is based on an existing schema that defines the relationships (hence relational) between the data inside the database. However, with the vast quantities of data that are now available to just about every business with an internet connection, relational databases simply aren't equipped to handle the complexity and scale of large datasets.
Why not SQL databases?
This is for a couple of reasons. The defined schemas that are a necessary component of every relational database can undermine the richness and integrity of the data you're working with, and relational databases are also hard to scale. Relational databases can only scale vertically, not horizontally. That's fine to a certain extent, but when you start getting high volumes of data - such as when millions of people use a web application, for example - things get really slow and you need more processing power. You can do this by upgrading your hardware, but that isn't really sustainable. By scaling out, as you can with NoSQL databases, you can use a distributed network of computers to handle data. That gives you more speed and more flexibility. This isn't to say that relational and SQL databases have had their day. They still fulfil many use cases. The difference is that NoSQL can offer far greater power and control for data-intensive use cases. Indeed, using a NoSQL database when SQL will do is only going to add more complexity to something that just doesn't need it. (Seven NoSQL Databases in a Week)
Different types of NoSQL databases and when to use them
So, now that we've looked at why NoSQL databases have grown in popularity in recent years, let's dig into some of the different options available. There are a huge number of NoSQL databases out there - some of them open source, some premium products - many of them built for very different purposes. Broadly speaking, there are 4 different models of NoSQL databases:
- Key-value pair-based databases
- Column-based databases
- Document-oriented databases
- Graph databases
Let's take a look at these four models, how they're different from one another, and some examples of the product options in each.
Key-value pair-based NoSQL database management systems
Key/value pair-based NoSQL databases store data in, as you might expect, pairs of keys and values. Data is stored with a matching key - keys have no relation or structure (so keys could be height, age, or hair color, for example).
When should you use a key/value pair-based NoSQL DBMS?
Key/value pair-based NoSQL databases are the most basic type of NoSQL database. They're useful for storing fairly basic information, like details about a customer.
Which key/value pair-based DBMS should you use?
There are a number of different key/value pair databases. The most popular is Redis. Redis is incredibly fast and very flexible in terms of the languages and tools it can be used with. It can be used for a wide variety of purposes - one of the reasons high-profile organizations such as Verizon, Atlassian, and Samsung use it. It's also open source, with enterprise options available for users with significant requirements. (Redis 4.x Cookbook) Other than Redis, options include Memcached and Ehcache.
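As a quick illustration of the key/value model, here is a minimal sketch using the node-redis client (v4-style API); the connection URL and key names are purely illustrative.

```typescript
import { createClient } from "redis";

async function main(): Promise<void> {
  const client = createClient({ url: "redis://localhost:6379" });
  await client.connect();

  // Store simple facts about a customer under predictable keys.
  await client.set("customer:42:name", "Ada Lovelace");
  await client.set("customer:42:tier", "gold");

  // Keys can carry a time-to-live, e.g. a session token that expires in an hour.
  await client.set("session:abc123", "customer:42", { EX: 3600 });

  const name = await client.get("customer:42:name");
  console.log(name); // "Ada Lovelace"

  await client.quit();
}

main().catch(console.error);
```

There are no joins or schemas here: everything hangs off the key you choose, which is exactly why this model suits simple lookups rather than complex queries.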
Beyond Redis, Memcached, and Ehcache, there are a number of multi-model options (which will crop up later, no doubt) such as Amazon DynamoDB, Microsoft's Cosmos DB, and OrientDB. (Hands-On Amazon DynamoDB for Developers [Video], RDS PostgreSQL and DynamoDB CRUD: AWS with Python and Boto3 [Video])
Column-based NoSQL database management systems
Column-based databases separate data into discrete columns. Instead of using rows - whereby the row ID is the main key - column-based database systems flip things around to make the data the main key. By using columns you can gain much greater speed when querying data. Although it's true that querying a whole row of data would take longer in a column-based DBMS, the use cases for column-based databases mean you probably won't be doing this. Instead, you'll be querying a specific part of the data rather than the whole row.
When should you use a column-based NoSQL DBMS?
Column-based systems are most appropriate for big data and instances where data is relatively simple and consistent (they don't handle volatility particularly well).
Which column-based NoSQL DBMS should you use?
The most popular column-based DBMS is Cassandra. The software prides itself on its performance, boasting 100% availability thanks to lacking a single point of failure, and offering impressive scalability at a good price. Cassandra's popularity speaks for itself - Cassandra is used by 40% of the Fortune 100. (Mastering Apache Cassandra 3.x - Third Edition, Learn Apache Cassandra in Just 2 Hours [Video]) There are other options available, such as HBase and Cosmos DB. (HBase High Performance Cookbook)
Document-oriented NoSQL database management systems
Document-oriented NoSQL systems are very similar to key/value pair database management systems. The only difference is that the value paired with a key is stored as a document. Each document is self-contained, which means no schema is required - giving a significant degree of flexibility over the data you have. For software developers, this is essential - it's for this reason that document-oriented databases such as MongoDB and CouchDB are useful components of the full-stack development tool chain. Some search platforms such as Elasticsearch use mechanisms similar to standard document-oriented systems, so they could be considered part of the same family of database management systems.
When should you use a document-oriented DBMS?
Document-oriented databases can help power many different types of websites and applications - from stores to content systems. However, the flexibility of document-oriented systems means they are not built for complex queries.
Which document-oriented DBMS should you use?
The leader in this space is MongoDB. With an amazing 40 million downloads (and apparently 30,000 more every single day), it's clear that MongoDB is a cornerstone of the NoSQL database revolution. (MongoDB 4 Quick Start Guide, MongoDB Administrator's Guide, MongoDB Cookbook - Second Edition) There are other options as well as MongoDB - these include CouchDB, Couchbase, DynamoDB, and Cosmos DB. (Learning Azure Cosmos DB, Guide to NoSQL with Azure Cosmos DB)
Graph-based NoSQL database management systems
The final type of NoSQL database is graph-based. The notable distinction about graph-based NoSQL databases is that they contain the relationships between different data. Consequently, graph databases look quite different to any of the other databases above - they store data as nodes, with the 'edges' of the nodes describing their relationship to other nodes.
Graph databases, compared to relational databases, are multidimensional in nature. They display not just basic relationships between tables and data, but more complex and multifaceted ones.
When should you use a graph database?
Because graph databases contain the relationships between a set of data (customers, products, price etc.) they can be used to build and model networks. This makes graph databases extremely useful for applications ranging from fraud detection to smart homes to search.
Which graph database should you use?
The world's most popular graph database is Neo4j. It's purpose built for data sets that contain strong relationships and connections. Widely used in the industry in companies such as eBay and Walmart, it has established its reputation as one of the world's best NoSQL database products. Back in 2015 Packt's Data Scientist demonstrated how he used Neo4j to build a graph application. Read more. (Learning Neo4j 3.x [Video], Exploring Graph Algorithms with Neo4j [Video])
NoSQL databases are the future - but know when to use the right one for the job
Although NoSQL databases will remain a fixture in the engineering world, SQL databases will always be around. This is an important point - when it comes to databases, using the right tool for the job is essential. It's a valuable exercise to explore a range of options and get to know how they work - sometimes the difference might just be a personal preference about usability. And that's fine - you need to be productive after all. But what's ultimately most essential is having a clear sense of what you're trying to accomplish, and choosing the database based on your fundamental needs.

Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says “yes and no”

Bhagyashree R
10 Sep 2019
6 min read
At Scala Days Lausanne 2019 in July, Martin Odersky, the lead designer of Scala, gave a tour of the upcoming major version, Scala 3.0. He talked about the roadmap to Scala 3.0, its new features, how its situation differs from the Python 2 vs 3 transition, and much more.
Roadmap to Scala 3.0
Odersky announced that "Scala 3.0 has almost arrived" since all the features are fleshed out, with implementations in the latest releases of Dotty, the next-generation compiler for Scala. The team plans to go into feature freeze and release Scala 3.0 M1 in fall this year. Following that, the team will focus on stabilization, completing the SIP process, and writing specs and user docs. They will also work on the community build, compatibility, and migration tasks. All these tasks will take about a year, so we can expect the Scala 3.0 release in fall 2020. Scala 2.13 was released in June this year. It shipped with redesigned collections, an updated futures implementation, and language changes including literal types, partial unification on by default, by-name implicits, and macro annotations, among others. The team is also working simultaneously on the next release, Scala 2.14, whose main focus will be to ease the migration process from Scala 2 to 3 by defining migration tools, shim libraries, targeted deprecations, and more.
What's new in this major release
There are a whole lot of improvements coming in Scala 3.0, some of which Odersky discussed in his talk:
- Scala will drop the 'new' keyword: Starting with Scala 3.0, you will be able to omit 'new' from almost all instance creations. With this change, developers will no longer have to define a case class just to get nice constructor calls. It will also prevent accidental infinite loops in cases where 'apply' has the same arguments as the constructor.
- Top-level definitions: In Scala 3.0, top-level definitions will be added as a replacement for package objects. This is because only one package object definition is allowed per package. Also, a trait or class defined in a package object is different from one defined in the package, which can lead to unexpected behavior.
- Redesigned enumeration support: Previously, Scala did not provide a very straightforward way to define enums. With this update, developers will have a simple way to define new types with a finite number of values or constructions. They will also be able to add parameters and define fields and methods.
- Union types: In previous versions, union types could be expressed with constructs such as Either or subtyping hierarchies, but these constructs are bulky. Adding union types to the language will fix Scala's least upper bounds problem and provide added modelling power.
- Extension methods: With extension methods, you can define methods that can be used infix without any boilerplate. These will essentially replace implicit classes.
- Delegates: Implicits are a "bedrock of programming" in Scala. However, they suffer from several limitations. Odersky calls implicit conversions a "recipe for disaster" because they tend to interact very badly with each other and add too much implicitness. Delegates will be their simpler and safer alternative.
- Functions everywhere: In Scala, functions and methods are two different things. While methods are members of classes and objects, functions are objects themselves. Until now, methods have been more powerful than functions: they can be dependent, polymorphic, and implicit. With Scala 3.0, these properties will be associated with functions as well.
Recent discussions regarding the updates in Scala 3.0
A minimal alternative for scala-reflect and TypeTag
Scala 3.0 will drop support for 'scala-reflect' and 'TypeTag', and there hasn't been much discussion about their alternative. However, some developers believe these are important features currently in use by many projects, including Spark and doobie. Explaining the reason behind dropping the support, a SIP committee member wrote on the discussion forum, "The goal in Scala 3 is what we had before scala-reflect. For use-cases where you only need an 80% solution, you should be able to accomplish that with straight-up Java reflection. If you need more, TASTY can provide you the basics. However, we don't think a 100% solution is something folks need, and it's unclear if there should be a "core" implementation that is not 100%." Odersky shared that Scala 3.0 has quoted.Type as an alternative to TypeTag. He commented, "Scala 3 has the quoted package, with quoted.Expr as a representation of expressions and quoted.Type as a representation of types. quoted.Type essentially replaces TypeTag. It does not have the same API but has similar functionality. It should be easier to use since it integrates well with quoted terms and pattern matching." Follow the discussion on Scala Contributors.
Python-style indentation (for now experimental)
Last month, Odersky proposed bringing indentation-based syntax to Scala while continuing to support the brace-based syntax. When Scala was first created, most languages used braces; with time, however, indentation-based syntax has become conventional. Listing the reasons behind this change, Odersky wrote:
- The most widely taught language is now (or will be soon, in any case) Python, which is indentation based.
- Other popular functional languages are also indentation based (e.g. Haskell, F#, Elm, Agda, Idris).
- Documentation and configuration files have shifted from HTML and XML to markdown and yaml, which are both indentation based.
So by now indentation is very natural, even obvious, to developers. There's a chance that anything else will increasingly be considered "crufty".
Odersky on whether Scala 3.0 is a new language
Odersky answers this with both yes and no. Yes, because Scala 3.0 will include several language changes, including feature removals. The newly introduced constructs will improve user experience and on-boarding dramatically, and current Scala books will need to be rewritten to reflect the recent developments. No, because it will still be Scala and all core constructs will still be the same. He concludes, "Between yes and no, I think the fairest answer is to say it is really a process. Scala 3 keeps most constructs of Scala 2.13, alongside the new ones. Some constructs like old implicits and so on will be phased out in the 3.x release train. So, that requires some temporary duplication in the language, but the end result should be a more compact and regular language." Comparing the situation with Python 2 and 3, Odersky believes that Scala's is better because of static typing and binary compatibility. The current version of Dotty can be linked with Scala 2.12 or 2.13 files. He shared that in the future, it will be possible to have a Dotty library module that can then be used by both Scala 2 and 3 modules.
Read also: Core Python team confirms sunsetting Python 2 on January 1, 2020
Watch Odersky's talk to know more in detail.
https://www.youtube.com/watch?v=_Rnrx2lo9cw&list=PLLMLOC3WM2r460iOm_Hx1lk6NkZb8Pj6A
Other news in programming
Implementing memory management with Golang's garbage collector
Golang 1.13 module mirror, index, and Checksum database are now production-ready
Why Perl 6 is considering a name change?

Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvement in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major additions include beta features such as Chaos - Destruction, multi-bounce reflection fallback in real-time ray tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements, and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version.
What's new in Unreal Engine 4.23?
Chaos - Destruction
Labelled as "Unreal Engine's new high-performance physics and destruction system", Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also supports high-level artist control over content creation and destruction.
https://youtu.be/fnuWG2I2QCY
Chaos supports many distinct features, including:
- Geometry Collections: A new type of asset in Unreal for short-lived objects. Geometry assets can be built using one or more Static Meshes, offering the artist flexibility in choosing what to simulate and how to organize and author the destruction.
- Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces using the Fracturing tools.
- Clustering: Sub-fracturing is used by artists to increase optimization. Every sub-fracture is an extra level added to the Geometry Collection. The Chaos system keeps track of the extra levels and stores the information in a Cluster, to be controlled by the artist.
- Fields: Fields can be used to control simulation and other attributes of the Geometry Collection. They enable users to vary the mass, make something static, or make a corner more breakable than the middle, among other things.
Unreal Insights
Currently in beta, Unreal Insights enables developers to collect and analyze data about Unreal Engine's behavior in a consistent way. The Trace System API is one of its components and is used to collect information from runtime systems consistently. Another component of Unreal Insights is the Unreal Insights Tool, which supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23.
Virtual Production Pipeline Improvements
Unreal Engine 4.23 advances the virtual production pipeline, making it possible to virtually scout environments, compose shots by connecting live broadcast elements with digital representations, and more.
- In-Camera VFX: With improvements to In-Camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds.
- VR Scouting for Filmmakers: The new VR Scouting tools can be used by filmmakers to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints, rather than rebuilding the engine in C++.
- Live Link Datatypes and UX Improvements: The Live Link Plugin can be used to drive character animation, cameras, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include saving and loading presets for Live Link setups, better status indicators to show the current Live Link sources, and more.
- Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible for users to create customized web user interfaces to trigger changes in the project's content.
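As a rough sketch of what such a custom web UI might do, the snippet below calls the editor's remote-control HTTP endpoint from TypeScript. The port, route, and payload shape are assumptions based on the Web Remote Control plugin rather than details confirmed by this release announcement, so treat it as illustrative and check the 4.23 documentation before relying on it.

```typescript
// Assumed endpoint and payload shape for the Web Remote Control plugin; verify
// against your engine version before use. The actor path below is made up.
async function setActorHidden(objectPath: string, hidden: boolean): Promise<void> {
  const response = await fetch("http://localhost:30010/remote/object/call", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      objectPath,                           // path to an actor in the loaded level
      functionName: "SetActorHiddenInGame", // a standard AActor function
      parameters: { bNewHidden: hidden },
    }),
  });
  if (!response.ok) {
    throw new Error(`Remote call failed with status ${response.status}`);
  }
}

// Example usage from a custom dashboard button:
setActorHidden("/Game/Maps/Demo.Demo:PersistentLevel.PointLight_1", true)
  .catch(console.error);
```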
Read Also: Epic releases Unreal Engine 4.22, focuses on adding "photorealism in real-time environments"
Real-Time Ray Tracing Improvements
- Performance and Stability: expanded DirectX 12 support, improved Denoiser quality, and increased Ray Traced Global Illumination (RTGI) quality.
- Additional Geometry and Material Support: Landscape Terrain, Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM), Procedural Meshes, Transmission with SubSurface Materials, and World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries.
- Multi-Bounce Reflection Fallback: Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures. This will increase the performance of all types of intra-reflections.
Virtual Texturing
The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures for a lower and more constant memory footprint at runtime.
- Streaming Virtual Texturing: Streaming Virtual Texturing uses Virtual Texture assets to offer an option to stream textures from disk rather than using the existing mip-based streaming. It minimizes texture memory overhead and increases performance when using very large textures.
- Runtime Virtual Texturing: Runtime Virtual Texturing provides a Runtime Virtual Texture asset. It can be used to supply shading data over large areas, making it suitable for Landscape shading.
Unreal Engine 4.23 also introduces new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos.
https://twitter.com/rista__m/status/1170608746692673537
https://twitter.com/jayakri59101140/status/1169553133518782464
https://twitter.com/NoisestormMusic/status/1169303013149806595
To know about the full updates in Unreal Engine 4.23, users can head over to the Unreal Engine blog.
Other news in Game Development
Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

5 pitfalls of React Hooks you should avoid - Kent C. Dodds

Sugandha Lahoti
09 Sep 2019
7 min read
The React community first introduced Hooks back in October 2018 as a way to use React state and other features without writing classes. The idea was simple: with the help of Hooks, you can "hook into" and use React state and other React features from function components. In February, React 16.8 was released with the stable implementation of Hooks. As popular as Hooks are, there are certain pitfalls developers should avoid when learning and adopting them. In his talk "React Hook Pitfalls" at React Rally 2019 (August 22-23, 2019), Kent C. Dodds talks about 5 common pitfalls of React Hooks and how to avoid or fix them. Kent is a world-renowned speaker and a maintainer of and contributor to hundreds of popular npm packages. He's also the creator of react-testing-library, which provides simple and complete React DOM testing utilities that encourage good testing practices.
Tl;dr
- Problem: Starting without a good foundation. Solution: Read the React Hooks docs and the FAQ.
- Problem: Not using (or ignoring) the ESLint plugin. Solution: Install, use, and follow the ESLint plugin.
- Problem: Thinking in lifecycles. Solution: Don't think about lifecycles, think about synchronizing side effects to state.
- Problem: Overthinking performance. Solution: React is fast by default, so research before applying performance optimizations prematurely.
- Problem: Overthinking the testing of React Hooks. Solution: Avoid testing 'implementation details' of the component.
Pitfall #1: Starting without a good foundation
Often React developers begin coding without reading the documentation, and that leads to a number of issues and small problems. Kent recommends that developers start by reading the React Hooks documentation and the FAQ section thoroughly. He jokingly adds, "Once you read the frequently asked questions, you can ask the infrequently asked questions. And then maybe those will get in the docs, too. In fact, you can make a pull request and put it in yourself."
Pitfall #2: Not using (or ignoring) the ESLint plugin
The ESLint plugin is the official plugin built by the React team. It has two rules: "rules of hooks" and "exhaustive deps." The default recommended configuration is to set "rules of hooks" to an error, and "exhaustive deps" to a warning. The linter plugin enforces these rules automatically. The two "Rules of Hooks" are:
- Don't call Hooks inside loops, conditions, or nested functions. Instead, always use Hooks at the top level of your React function. By following this rule, you ensure that Hooks are called in the same order each time a component renders.
- Only call Hooks from React functions. Don't call Hooks from regular JavaScript functions. Instead, call them from React function components or from custom Hooks.
Kent acknowledges that sometimes the rule is incapable of performing static analysis on your code properly due to limitations of ESLint. "I believe," he says, "this is why it's recommended to set the exhaustive deps rule to 'warn' instead of 'error.'" When this happens, the plugin will tell you so in the warning, and he recommends that developers restructure their code to avoid that warning. The solution Kent offers for this pitfall is to install, use, and follow the ESLint plugin. The ESLint plugin, he says, will not only catch easily missable bugs, but will also teach you things about your code and hooks in the process.
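As a concrete illustration of what the exhaustive-deps rule buys you, here is a small, hypothetical component (the endpoint and prop names are invented). Leaving userId out of the dependency array is exactly the kind of easily missable bug the plugin flags.

```tsx
import React, { useEffect, useState } from "react";

function Profile({ userId }: { userId: string }) {
  const [user, setUser] = useState<{ name: string } | null>(null);

  useEffect(() => {
    let cancelled = false;
    fetch(`https://example.com/api/users/${userId}`)
      .then((res) => res.json())
      .then((data) => {
        if (!cancelled) setUser(data);
      });
    return () => {
      cancelled = true;
    };
  }, [userId]); // omit `userId` here (e.g. pass []) and exhaustive-deps warns,
                // because the effect would keep showing a stale user after the prop changes

  return <p>{user ? user.name : "Loading..."}</p>;
}

export default Profile;
```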
Pitfall #3: Thinking in lifecycles
With Hooks, components are declarative. Kent says this lets you stop thinking about "when things should happen in the lifecycle of the component" (which doesn't matter that much) and think more about "when things should happen in relation to state changes" (which matters much more). With React Hooks, he adds, you're not thinking about component lifecycles; you're thinking about synchronizing the state of the side effects with the state of the application. This idea is difficult for React developers to grasp initially, but once you do, he adds, you will naturally experience fewer bugs in your apps thanks to the design of the API.
https://twitter.com/ryanflorence/status/1125041041063665666
Solution: Think about synchronizing side effects to state, rather than lifecycle methods.

Pitfall #4: Overthinking performance
Kent says that even though it's really important to be considerate of performance, you should also think about your code complexity. If your code is complex, you can't give people the great features they're looking for, because you'll be spending all your time dealing with that complexity. He adds that "unnecessary re-renders" are not necessarily bad for performance: just because a component re-renders doesn't mean the DOM will get updated (and updating the DOM is what can be slow). React does a great job at optimizing itself; it's fast by default. On this he says, “If your app's unnecessary re-renders are causing your app to be slow, first investigate why renders are slow. If rendering your app is so slow that a few extra re-renders produces a noticeable slow-down, then you'll likely still have performance problems when you hit "necessary re-renders." Once you fix what's making the render slow, you may find that unnecessary re-renders aren't causing problems for you anymore.” If unnecessary re-renders are still causing performance problems after that, you can reach for the built-in performance optimization APIs like React.memo, React.useMemo, and React.useCallback. There is more information on this in Kent's blog post on useMemo and useCallback.
Solution: React is fast by default, so research before applying performance optimizations prematurely; profile your app and then optimize it.

Pitfall #5: Overthinking the testing of React Hooks
Kent says that people are often concerned they need to rewrite their tests along with all of their components when they refactor from class components to Hooks. He explains, “Whether your component is implemented via Hooks or as a class, it is an implementation detail of the component. Therefore, if your test is written in such a way that reveals that, then refactoring your component to hooks will naturally cause your test to break.” He adds, “But the end-user doesn't care about whether your components are written with hooks or classes. They just care about being able to interact with what those components render to the screen. So if your tests interact with what's being rendered, then it doesn't matter how that stuff gets rendered to the screen, it'll all work whether you're using classes or hooks.”
To avoid this pitfall, Kent recommends writing tests that work irrespective of whether you're using classes or Hooks. Before you upgrade to Hooks, start writing your tests free of implementation details; your refactored Hooks can then be validated by the tests you've already written for your classes.
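To make that last point concrete, here is a small, hypothetical example in the spirit of react-testing-library (which Kent created). The Counter component and test are our own illustration, not taken from the talk; the point is that the test only exercises what the user sees, so it would keep passing if the hook were swapped for a class.

```tsx
import React, { useState } from "react";
import { render, fireEvent } from "@testing-library/react";

// A hypothetical counter, written with a hook here. The test below never
// inspects state or hooks, so it would pass unchanged for a class version.
function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(c => c + 1)}>Count: {count}</button>;
}

// Assumes a Jest-style test runner providing `test` and `expect` globals.
test("increments when clicked", () => {
  const { getByText } = render(<Counter />);
  fireEvent.click(getByText("Count: 0"));
  // Assert only on what is rendered to the screen, not on implementation.
  expect(getByText("Count: 1")).toBeTruthy();
});
```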
The more your tests resemble the way your software is used, the more confidence they can give you.

In review:
- Read the docs and the FAQ.
- Install, use, and follow the ESLint plugin.
- Think about synchronizing side effects to state.
- Profile your app and then optimize it.
- Avoid testing implementation details.

Watch the full talk on YouTube: https://www.youtube.com/watch?v=VIRcX2X7EUk

Read more about React
#Reactgate forces React leaders to confront community’s toxic culture head on
React.js: why you should learn the front end JavaScript library and how to get started
Ionic React RC is now out!


Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift

Bhagyashree R
09 Sep 2019
5 min read
After working for over 1.5 years on the Differentiable Programming Mega-Proposal, Richard Wei, a developer at Google Brain, and his team submitted the proposal on the Swift Evolution forum on Thursday last week. The proposal aims to “push Swift's capabilities to the next level in numerics and machine learning” by introducing differentiable programming as a new language feature in Swift. It is part of the Swift for TensorFlow project, under which the team is integrating TensorFlow directly into the language to offer developers a next-generation platform for machine learning.

What is differentiable programming?
With the increasing sophistication of deep learning models and the introduction of modern deep learning frameworks, many researchers have started to realize that building neural networks is very similar to programming. Yann LeCun, VP and Chief AI Scientist at Facebook, calls differentiable programming “a little more than a rebranding of the modern collection Deep Learning techniques, the same way Deep Learning was a rebranding of the modern incarnations of neural nets with more than two layers.” He compares it with regular programming, with the only difference being that the resulting programs are “parameterized, automatically differentiated, and trainable/optimizable.”

Many also say that differentiable programming is another name for automatic differentiation, a collection of techniques for numerically evaluating the derivative of a function. It can be seen as a new programming paradigm in which programs can be differentiated throughout. Check out the paper “Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagation” for a better understanding of differentiable programming.

Why differentiable programming is proposed in Swift
Swift is an expressive, high-performance language, which makes it a perfect candidate for numerical applications. According to the proposal authors, first-class support for differentiable programming in Swift will allow safe and powerful machine learning development. The authors also believe that this is a “big step towards high-level numerical computing support.” With this proposal, they aim to make Swift a “real contender in the numerical computing and machine learning landscape.”

Here are some of the advantages of adding first-class support for differentiable programming in Swift:
- Better language coverage: First-class differentiable programming support will enable differentiation to work smoothly with other Swift features. This will allow developers to code normally without being restricted to a subset of Swift.
- Enable extensibility: This will provide developers an extensible differentiable programming system. They will be able to create custom differentiation APIs by leveraging primitive operators defined in the standard library and supported by the type system.
- Static warnings and errors: This will enable the compiler to statically identify functions that cannot be differentiated or that will give a zero derivative, and then emit a non-differentiability error or warning. This improves productivity by making common runtime errors in machine learning directly debuggable without library boundaries.

Some of the components that will be added to Swift under this proposal are:
- The Differentiable protocol: a standard library protocol that generalizes all data structures that can be a parameter or result of a differentiable function.
- The @differentiable declaration attribute: used to mark function-like declarations as differentiable.
- @differentiable function types: a subtype of normal function types with a different runtime representation and calling convention; differentiable function types have differentiable parameters and results.
- Differential operators: the core differentiation APIs that take @differentiable functions as inputs and return derivative functions or compute derivative values.
- @differentiating and @transposing attributes: attributes for declaring a custom derivative function for some other function declaration.

(For readers new to the underlying idea, a small conceptual sketch of automatic differentiation follows at the end of this piece.)

The proposal sparked a discussion on Hacker News. Many developers were excited about bringing differentiable programming support into the Swift core. A user commented, “This is actually huge. I saw a proof of concept of something like this in Haskell a few years back, but it's amazing it see it (probably) making it into the core of a mainstream language. This may let them capture a large chunk of the ML market from Python - and hopefully, greatly improve ML APIs while they're at it.”

Some felt that a library could have served the same purpose: “I don't see why a well-written library could not serve the same purpose. It seems like a lot of cruft. I doubt, for example, Python would ever consider adding this and it's the de facto language that would benefit the most from something like this - due to the existing tools and communities. It just seems so narrow and not at the same level of abstraction that languages typically sit at. I could see the language supporting higher-level functionality so a library could do this without a bunch of extra work (such as by some reflection),” a user added.

Users also discussed another effort along the same lines: Julia's Zygote, a working prototype for source-to-source automatic differentiation. A user commented, “Yup, work is continuing apace with Julia’s next-gen Zygote project. Also, from the GP’s thought about applications beyond DL, my favorite examples so far are for model-based RL and Neural ODEs.”

To know more, check out the full proposal: Differentiable Programming Mega-Proposal.

Other news in programming
Why Perl 6 is considering a name change?
The Julia team shares its finalized release process with the community
TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more
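As promised above, here is a tiny conceptual sketch of forward-mode automatic differentiation using dual numbers. It is written in TypeScript purely for illustration and is not the proposed Swift API; it only shows what it means for a program to compute a derivative alongside its value, which is the idea the proposal builds language support for.

```ts
// Forward-mode automatic differentiation with dual numbers -- illustrative only.
type Dual = { value: number; deriv: number };

const constant = (x: number): Dual => ({ value: x, deriv: 0 });
const variable = (x: number): Dual => ({ value: x, deriv: 1 });

const add = (a: Dual, b: Dual): Dual => ({
  value: a.value + b.value,
  deriv: a.deriv + b.deriv,
});

const mul = (a: Dual, b: Dual): Dual => ({
  value: a.value * b.value,
  // Product rule: (ab)' = a'b + ab'
  deriv: a.deriv * b.value + a.value * b.deriv,
});

// f(x) = x*x + 3x, so f'(x) = 2x + 3
const f = (x: Dual): Dual => add(mul(x, x), mul(constant(3), x));

console.log(f(variable(2))); // { value: 10, deriv: 7 }
```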

What can you expect at NeurIPS 2019?

Sugandha Lahoti
06 Sep 2019
5 min read
Popular machine learning conference NeurIPS 2019 (Conference on Neural Information Processing Systems) will be held Sunday, December 8 through Saturday, December 14 at the Vancouver Convention Center. The conference invites papers, tutorials, and submissions on cross-disciplinary research where machine learning methods are being used in other fields, as well as on methods and ideas from other fields being applied to ML.

NeurIPS 2019 accepted papers
Yesterday, the conference published the list of accepted papers. A total of 1429 papers have been selected. Submissions opened on May 1 across a variety of topics such as Algorithms, Applications, Data implementations, Neuroscience and Cognitive Science, Optimization, Probabilistic Methods, Reinforcement Learning and Planning, and Theory. (The full list of subject areas is available here.)

This year at NeurIPS 2019, authors of accepted submissions were required to prepare either a 3-minute video or a PDF of slides summarizing the paper, or a PDF of the poster used at the conference. This was done to make NeurIPS content accessible to those unable to attend the conference. NeurIPS 2019 also introduced a mandatory abstract submission deadline a week before final submissions were due: only a submission with a full abstract was allowed to have the full paper uploaded. Authors were also asked to answer questions from the Reproducibility Checklist during the submission process.

NeurIPS 2019 tutorial program
NeurIPS also invites experts to present tutorials on topics that are of interest to a sizable portion of the NeurIPS community and that differ from the ones already presented at other ML conferences like ICML or ICLR. The organizers looked for tutorial speakers who can cover topics beyond their own research in a comprehensive manner that encompasses multiple perspectives.

The tutorial chairs for NeurIPS 2019 are Danielle Belgrave and Alice Oh. They initially compiled a list based on the last few years' publications, workshops, and tutorials presented at NeurIPS and related venues, asked colleagues for recommendations, and conducted independent research. In reviewing the potential candidates, the chairs read papers to understand each candidate's expertise and watched their videos to appreciate their style of delivery. The list of candidates was emailed to the General Chair, the Diversity & Inclusion Chairs, and the rest of the Organizing Committee for comments on the shortlist; following a few adjustments based on their input, the speakers were selected.

A total of nine tutorials have been selected for NeurIPS 2019:
- Deep Learning with Bayesian Principles - Emtiyaz Khan
- Efficient Processing of Deep Neural Network: from Algorithms to Hardware Architectures - Vivienne Sze
- Human Behavior Modeling with Machine Learning: Opportunities and Challenges - Nuria Oliver, Albert Ali Salah
- Interpretable Comparison of Distributions and Models - Wittawat Jitkrittum, Dougal Sutherland, Arthur Gretton
- Language Generation: Neural Modeling and Imitation Learning - Kyunghyun Cho, Hal Daume III
- Machine Learning for Computational Biology and Health - Anna Goldenberg, Barbara Engelhardt
- Reinforcement Learning: Past, Present, and Future Perspectives - Katja Hofmann
- Representation Learning and Fairness - Moustapha Cisse, Sanmi Koyejo
- Synthetic Control - Alberto Abadie, Vishal Misra, Devavrat Shah

NeurIPS 2019 workshops
NeurIPS workshops are primarily used for discussion of work in progress and future directions.
This time the number of Workshop Chairs doubled from two to four; the selected chairs are Jenn Wortman Vaughan, Marzyeh Ghassemi, Shakir Mohamed, and Bob Williamson. However, the number of workshop submissions went down from 140 in 2018 to 111 in 2019. Of these 111 submissions, 51 workshops were selected. The full list of selected workshops is available here.

The NeurIPS 2019 chair committee introduced new guidelines, expectations, and selection criteria for the workshops. This time, workshops were evaluated with a strong focus on the nature of the problem, the intellectual excitement of the topic, diversity and inclusion, the quality of proposed invited speakers, the organizational experience and ability of the team, and more.

The Workshop Program Committee consisted of 37 reviewers, with each workshop proposal assigned to two reviewers. The reviewer committee included more senior researchers who have been involved with the NeurIPS community. Reviewers were asked to provide a summary and overall rating for each workshop, a detailed list of pros and cons, and specific ratings for each of the new criteria. After all reviews were submitted, each proposal was assigned to two of the four chair committee members. The chair members looked through their assigned proposals and reviews to form an educated assessment of the pros and cons of each. Finally, the entire chair committee held a meeting to discuss every submitted proposal and make decisions.

You can check more details about the conference on the NeurIPS website. As always, keep checking this space for more content about the conference. In the meanwhile, you can read our previous years' coverage:
NeurIPS Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning
NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk]
NeurIPS 2018: Rethinking transparency and accountability in machine learning
NeurIPS 2018: Developments in machine learning through the lens of Counterfactual Inference [Tutorial]
Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]


Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case

Bhagyashree R
06 Sep 2019
6 min read
Last year, Dr. Johnny Ryan, the Chief Policy & Industry Relations Officer at Brave, filed a complaint against Google’s DoubleClick/Authorized Buyers ad business with the Irish Data Protection Commission (DPC). New evidence produced by Brave reveals that Google is circumventing GDPR and undermining its own data protection measures.

Brave calls Google’s Push Pages a GDPR workaround
Brave’s new evidence rebuts some of Google’s claims regarding its DoubleClick/Authorized Buyers system, the world’s largest real-time advertising auction house. Google says that it prohibits companies that use its real-time bidding (RTB) ad system “from joining data they receive from the Cookie Matching Service.” In September last year, Google announced that it had removed encrypted cookie IDs and list names from bid requests with buyers in its Authorized Buyers marketplace. Brave’s research, however, found otherwise: “Brave’s new evidence reveals that Google allowed not only one additional party, but many, to match with Google identifiers. The evidence further reveals that Google allowed multiple parties to match their identifiers for the data subject with each other.”

When you visit a website with Google ads embedded on its pages, Google runs a real-time bidding ad auction to determine which advertiser gets to display its ads. For this, it uses Push Pages, the mechanism in question here. Brave hired Zach Edwards, the co-founder of digital analytics startup Victory Medium, and MetaX, a company that audits data supply chains, to investigate and analyze a log of Dr. Ryan’s web browsing. The research revealed that Google's Push Pages can essentially be used as a workaround for user IDs. Google shares a ‘google_push’ identifier with the participating companies to identify a user. The problem, Brave says, is that the shared identifier was common to multiple companies, which means these companies could have cross-referenced what they learned about the user from Google with each other.

Used by more than 8.4 million websites, Google's DoubleClick/Authorized Buyers broadcasts personal data of users to more than 2,000 companies. This data includes the category of what a user is reading, which can reveal their political views, sexual orientation, and religious beliefs, as well as their locations. There are also unique ID codes specific to a user that can let companies uniquely identify that user. All this information gives these companies a way to keep tabs on what users are “reading, watching, and listening to online.”

Brave calls Google’s RTB data protection policies “weak,” as they ask these companies to self-regulate, and Google does not have much control over what the companies do with the data once broadcast. “Its policy requires only that the thousands of companies that Google shares peoples’ sensitive data with monitor their own compliance, and judge for themselves what they should do,” Brave wrote.

In response to this news, a Google spokesperson told Forbes, “We do not serve personalised ads or send bid requests to bidders without user consent. The Irish DPC — as Google's lead DPA — and the UK ICO are already looking into real-time bidding in order to assess its compliance with GDPR. We welcome that work and are co-operating in full.”
Users recommend an “information campaign” over a penalty that will hardly affect big tech
This news triggered a discussion on Hacker News where users talked about the implications of RTB and what actions the EU could take to protect user privacy. One user explained, "So, let's say you're an online retailer, and you have Google IDs for your customers. You probably have some useful and sensitive customer information, like names, emails, addresses, and purchase histories. In order to better target your ads, you could participate in one of these exchanges, so that you can use the information you receive to suggest products that are as relevant as possible to each customer. To participate, you send all this sensitive information, along with a Google ID, and receive similar information from other retailers, online services, video games, banks, credit card providers, insurers, mortgage brokers, service providers, and more! And now you know what sort of vehicles your customers drive, how much they make, whether they're married, how many kids they have, which websites they browse, etc. So useful! And not only do you get all these juicy private details, but you've also shared your customers sensitive purchase history with anyone else who is connected to the exchange."

Others said that a penalty is not going to deter Google: "The whole penalty system is quite silly. The fines destroy small companies who are the ones struggling to comply, and do little more than offer extremely gentle pokes on the wrist for megacorps that have relatively unlimited resources available for complete compliance, if they actually wanted to comply."

Users suggested that the EU should instead start an information campaign: "EU should ignore the fines this time and start an "information campaign" regarding behavior of Google and others. I bet that hurts Google 10 times more."

Some also said that not just Google but the RTB participants should be held responsible: "Because what Google is doing is not dissimilar to how any other RTB participant is acting, saying this is a Google workaround seems disingenuous."

With this case, Brave has launched a full-fledged campaign, spanning sixteen EU countries, that aims to reform the multi-billion dollar RTB industry. To achieve this goal, it has collaborated with several privacy NGOs and academics, including the Open Rights Group and Dr. Michael Veale of the Turing Institute, among others.

In other news, a Bloomberg report reveals that Google and other internet companies have recently asked for an amendment to the California Consumer Privacy Act, which will be enacted in 2020. The law currently limits how digital advertising companies collect and make money from user data. The proposed amendments include approval for collecting user data for targeted advertising, using the collected data from websites for their own analysis, and more. Read the Bloomberg report to know more in detail.

Other news in Data
Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge
GDPR complaint in EU claim billions of personal data leaked via online advertising bids
European Union fined Google 1.49 billion euros for antitrust violations in online advertising