
Tech News

3711 Articles

Mozilla's new Firefox DNS security updates spark privacy hue and cry

Melisha Dsouza
07 Aug 2018
4 min read
Mozilla has upped its security game by introducing two new features to its Firefox browser: "DNS over HTTPS" (DoH) and the "Trusted Recursive Resolver" (TRR). According to Mozilla, the aim is to make one of the oldest parts of the internet's architecture, the DNS, more private and secure. This will be done by encrypting DNS queries and by testing a service that keeps DNS providers from collecting and sharing users' browsing history. But internet security geeks are far from convinced by Mozilla's claim.

DoH and TRR explained

A DNS server converts a domain name into an IP address. When you enter a domain in your browser, a request is automatically sent to the DNS server you have configured; the server looks up the domain name and returns an IP address for your browser to connect to. However, this DNS traffic is unencrypted and shared with multiple parties, leaving it vulnerable to capture and surveillance.

Enter Mozilla with two new updates to save the day. The DNS over HTTPS (DoH) protocol encrypts DNS requests and responses: requests sent to a DoH server are encrypted, while old-style DNS requests are not protected. (A short illustration of what a DoH lookup looks like follows the workaround steps below.) The next thing up Mozilla's alley is a default configuration for DoH servers that puts privacy first, known as the Trusted Recursive Resolver (TRR). With TRR turned on by default, any DNS settings a Firefox user has configured on the network are overridden. Mozilla has partnered with Cloudflare, which has agreed to a strong privacy policy intended to protect user data.

Why security geeks don't like Mozilla's DNS updates

Even though Mozilla transports requests over HTTPS, which encrypts the data, the main concern is that DNS resolution used to be local, so the parties in a position to spy on you were also local. With Firefox's change, Cloudflare can read everyone's DNS requests, because Firefox resolves domain names from the application itself via a Cloudflare DNS server based in the United States. This itself poses a threat, since Cloudflare is a third party, and we all know the consequences of having a third party handle our data and network traffic. Despite the assurance that Cloudflare has signed a "pro-user privacy policy" that deletes all personally identifiable data within 24 hours, you can never say what will be done with your data. After the Cambridge Analytica scandal, nothing virtual can be trusted.

Here's a small overview of what can go wrong because of the TRR. TRR effectively removes the anonymity of local resolution: before Mozilla implemented this change, DNS resolution was local and could only be attacked locally. With Mozilla's change, all DNS requests are seen by Cloudflare, and in turn by any government agency that has the legal right to request data from Cloudflare. In short, any (US) government agency could trace you if your information were of interest or benefit to it.

So, to save everyone the trouble, let's explore what you can do about the situation. It's simple: turn TRR off. Hacker News users suggest the following workaround:

Enter about:config in the address bar
Search for network.trr
Set network.trr.mode = 5 to completely disable it

If you want to explore more about mode 5, head over to mozilla.org. You can also set network.trr.mode to 2 to enable DoH: this mode tries DoH but falls back to insecure DNS in some circumstances, such as captive portals. (Use mode 5 to disable DoH under all circumstances.) The other modes are described on usejournal.com.
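For readers curious what a DoH lookup actually looks like on the wire, here is a minimal Python sketch that resolves a hostname over HTTPS using Cloudflare's public DNS-over-HTTPS JSON endpoint. This is only an illustration of the protocol idea, not Firefox's implementation, and it assumes the third-party requests library is installed.

```python
import requests

def doh_lookup(name, record_type="A"):
    # Resolve a hostname via Cloudflare's public DoH JSON endpoint.
    # The query and answer travel inside an encrypted HTTPS connection,
    # unlike a traditional plaintext UDP/53 DNS lookup.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("example.com"))  # prints the resolved IP address(es)
```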
You may be surprised that such a simple update can fuel so much discussion. It comes down to a choice between blindly trusting a third-party service and being your own boss by switching TRR off. Whose side are you on? To know more about this update, head over to Mozilla's blog.

Read next:
Firefox Nightly browser: Debugging your app is now fun with Mozilla's new 'time travel' feature
Mozilla is building a bridge between Rust and JavaScript
Firefox has made a password manager for your iPhone


Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

Vincy Davis
21 Aug 2019
3 min read
Yesterday, the team behind Julia announced the release of Julia v1.2, the second minor release in the 1.x series. It brings new features such as argument splatting, support for Unicode 12, and a new ⋆ (star) unary operator, along with many performance improvements delivered through marginal, non-disruptive changes. The post notes that Julia v1.2 is not a long-term support release: "As of this release, 1.1 has been effectively superseded by 1.2, which means there will not likely be any further 1.1.x releases. Our good friend 1.0 is still currently the only long-term support version."

What's new in Julia v1.2

Argument splatting (x...) can now be used in calls to the new pseudo-function in constructors.
Support for Unicode 12 has been added.
A new unary operator ⋆ (star) has been added.

New library functions

New functions !=(x), >(x), >=(x), <(x), and <=(x) return partially-applied versions of the corresponding comparisons.
A new getipaddrs() function returns all the IP addresses of the local machine, including the IPv4 addresses.
New library functions Base.hasproperty and Base.hasfield have been added.

Other improvements in Julia v1.2

Multi-threading changes

It is now possible to schedule and switch tasks during @threads loops, and to perform limited I/O.
A thread-safe replacement for the Condition type has been added; it can be accessed as Threads.Condition.

Standard library changes

The extrema function now accepts a function argument in the same way as minimum and maximum.
The hasmethod method can now check for matching keyword argument names.
The mapreduce function accepts multiple iterators.
Functions that invoke commands, such as run(::Cmd), now raise a ProcessFailedException rather than an ErrorException.
A new no-argument constructor for Ptr{T} constructs a null pointer.

Jeff Bezanson, Julia co-creator, says, "If you maintain any packages, this is a good time to add CI for 1.2, check compatibility, and tag new versions as needed."

Users are happy with the Julia v1.2 release and full of praise for the language. One Hacker News commenter writes, "Julia has very well thought syntax and runtime I hope to see it succeed in the server-side web development area." Another says, "I've recently switched to Julia for all my side projects and I'm loving it so far! For me the killer feature is the seamless GPUs integration."

For more information on Julia v1.2, head over to its release notes.

Read next:
Julia co-creator, Jeff Bezanson, on what's wrong with Julialang and how to tackle issues like modularity and extension
Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment


Istio 1.3 releases with traffic management, improved security, and more!

Amrata Joshi
16 Sep 2019
3 min read
Last week, the team behind Istio, an open-source service mesh platform, announced Istio 1.3. This release makes the service mesh platform easier to use.

What's new in Istio 1.3?

Traffic management
In this release, HTTP or TCP is determined automatically for outbound traffic when ports are not named according to Istio's conventions. The team has added a mode to the Gateway API for mutual TLS operation. The proxy readiness check has been improved and now checks Envoy's readiness status. Load balancing now directs traffic to the same region and zone by default, and the Redis load balancer now defaults to MAGLEV when using the Redis proxy.

Improved security
This release adds trust domain validation for services that use mutual TLS; by default, the server only authenticates requests from the same trust domain. The team has added SDS (Secret Discovery Service) support for delivering private keys and certificates to each of the Istio control plane services, and has implemented major security policies, including RBAC, directly in Envoy.

Experimental telemetry
The Istio proxy can now emit HTTP metrics directly to Prometheus, without the need for the istio-telemetry service.

Secure handling of inbound traffic
Istio 1.3 secures and handles all inbound traffic on any port without requiring containerPort declarations. The team has also eliminated the infinite loops caused in iptables rules when workload instances send traffic to themselves.

Enhanced EnvoyFilter API
The EnvoyFilter API has been enhanced so that users can fully customize HTTP/TCP listeners, the filter chains returned by LDS (Listener Discovery Service), the Envoy HTTP route configuration returned by RDS (Route Discovery Service), and much more.

Improved control plane monitoring
Control plane monitoring has been enhanced with new metrics for configuration state, metrics for the sidecar injector, and a new Grafana dashboard for Citadel.

Users all over seem to be excited about this release.
https://twitter.com/HamzaZ21823474/status/1172235176438575105
https://twitter.com/vijaykodam/status/1172237003506798594

To know more about this news, check out the release notes.

Other interesting news in Cloud & networking:
StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
Kong announces Kuma, an open-source project to overcome the limitations of first-generation service mesh technologies


California passes the U.S.' first IoT security bill

Prasad Ramesh
25 Sep 2018
3 min read
California likes to lead the way when it comes to digital regulation. Just a few weeks ago it passed legislation that looks like it could restore net neutrality. Now a bill designed to tighten IoT security is with the governor, awaiting signature before it can be carried into California state law.

The bill, SB-327 Information privacy: connected devices, was initially introduced in February 2017 by Senator Jackson and was the first legislation of its kind in the US. Approved at the end of August, it will come into effect at the start of 2020 once signed by Governor Jerry Brown.

Read next: IoT Forensics: Security in an always connected world where things talk

What California's IoT bill states

The new IoT security bill covers a number of important areas. For manufacturers, IoT devices will need to contain certain safety and security features:

Security should be appropriate to the nature and function of the device.
The feature should be appropriate to the information an IoT device may collect, contain, or transmit.
It should be designed to protect the device and the information within it from unauthorized access, destruction, use, modification, or disclosure.

If an IoT device requires authentication over the internet, further conditions need to be met, such as:

The preset password must be unique to each device that is manufactured.
The device must ask the user to generate a new authentication method before it can be used for the first time.

It's worth noting that these IoT security requirements do not apply to IoT devices that are already subject to security requirements under federal law. Likewise, a covered entity such as a health care provider, business associate, contractor, or employer subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) or the Confidentiality of Medical Information Act is exempt from the points mentioned.

The IoT is a network of devices that connect to the internet, often via Wi-Fi. They are not openly visible, as most of them sit on a local network, and they often lack meaningful security measures. The bill doesn't give an exact definition of a 'reasonable security feature' but provides a few guiding points in the interest of user security. The legislation states:

"This bill, beginning on January 1, 2020, would require a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure, as specified."

Criticisms of the IoT bill

Some cybersecurity experts have criticised the legislation. Robert Graham, for example, writes on his Errata Security blog that the bill is "based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation." He explains that "the point [of good cybersecurity practice] is not to add 'security features' but to remove 'insecure features'."

Graham's criticism underlines that while the legislation may be well-intentioned, whether it will be impactful is another matter. Still, it is at the very least a step in the right direction by a state keen to take digital security and freedom into its own hands.

You can read the bill at the California Legislative Information website.

Read next:
How Blockchain can level up IoT Security
Defending your business from the next wave of cyberwar: IoT Threats


Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company's 2018 annual report on LinkedIn yesterday. He talks about Microsoft's accomplishments over the past year and the results and progress across Microsoft's modern workplace, business applications, infrastructure, data, AI, and gaming. He also discusses the data and privacy rules adopted by Microsoft and the company's belief in "instilling trust in technology across everything they do."

Microsoft's results and progress

Data and AI
Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Its Azure Bot Service has nearly 300,000 developers, and the company is on the road to building the world's first AI supercomputer in Azure. Microsoft also acquired GitHub in recognition of the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications
Microsoft's investments in Power BI have made it the leader in business analytics in the cloud. Its Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used to design for first-line workers, who account for 80 percent of the world's workforce. New solutions powered by LinkedIn and the Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure
Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world's computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace
More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming
The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft's impact around the globe
Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft's technology to build their own digital capabilities. Walmart is using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft's partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state's 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility
Microsoft's motto is "instilling trust in technology across everything they do." Nadella says, "We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices." Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. The company announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and is advocating for government regulation. It is addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, "Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work." Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and has increased African American/Black and Hispanic/Latino representation by 33 percent.

He concludes by saying, "I'm proud of our progress, and I'm proud of the more than 100,000 Microsoft employees around the world who are focused on our customers' success in this new era."

Read the full letter on LinkedIn.

Read next:
Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members


Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate

Bhagyashree R
18 Jun 2019
5 min read
Last week, a team of machine learning experts published a paper titled "Tackling Climate Change with Machine Learning". The paper highlights how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate.
https://twitter.com/hardmaru/status/1139340463486320640

The consequences of climate change are becoming more apparent by the day, and one of the most significant is global warming, driven mainly by the emission of greenhouse gases. The report suggests that we can mitigate this problem by making changes to existing electricity systems, transportation, buildings, industry, and land use. Adapting to the changing climate requires climate modeling, risk prediction, and planning for resilience and disaster management. The 54-page report lists various ways machine learning can help us mitigate and adapt to the problem of greenhouse gas emissions. In this article, we look at how machine learning and deep learning can be used to reduce greenhouse gas emissions from electricity systems.

Electricity systems

A quarter of human-caused greenhouse gas emissions come from electricity systems. To minimize this we need to switch to low-carbon electricity sources and take steps to reduce emissions from existing carbon-emitting power plants. There are two types of low-carbon electricity sources: variable and controllable.

Variable sources

Variable sources fluctuate based on external factors; for instance, the energy produced by solar panels depends on the sunlight.

Power generation and demand forecasting
Though ML and deep learning methods have been applied to power generation and demand forecasting before, this has largely been done with domain-agnostic techniques: for instance, clustering techniques on households, or game theory, optimization, regression, or online learning to predict disaggregated quantities from aggregate electricity signals. The study suggests that future ML algorithms should incorporate domain-specific insights, drawing on innovations in climate modeling, weather forecasting, and hybrid physics-plus-ML modeling techniques. These techniques will help improve both short- and medium-term forecasts, and ML models can be used to directly optimize for system goals. (A toy forecasting sketch follows below.)
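As a toy illustration of the kind of supervised forecasting the report describes, the sketch below fits a ridge regression on lagged values of a synthetic hourly demand series to predict the next hour. The data and model choice are assumptions for demonstration only; the paper itself argues for far more domain-aware approaches. It requires NumPy and scikit-learn.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic hourly electricity demand: a daily cycle plus noise (illustration only).
rng = np.random.default_rng(0)
hours = np.arange(24 * 90)  # 90 days of hourly observations
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Lagged features: predict the next hour from the previous 24 hours of demand.
lags = 24
X = np.stack([demand[i:i + lags] for i in range(demand.size - lags)])
y = demand[lags:]

# Chronological split so the held-out data lies strictly in the "future".
split = int(0.8 * len(y))
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
print("held-out R^2:", round(model.score(X[split:], y[split:]), 3))
```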
Improving scheduling and flexible demand
ML can play an important role in improving the existing centralized process of scheduling and dispatching by speeding up power system optimization problems. It can be used to fit fast function approximators to existing optimization problems or to provide good starting points for optimization. Dynamic scheduling and safe reinforcement learning can also be used to balance the electric grid in real time to accommodate variable generation or demand. ML, or other simpler techniques, can enable flexible demand by making storage and smart devices respond automatically to electricity prices. To provide appropriate signals for flexible demand, system operators can design electricity prices based on, for example, forecasts of variable electricity or grid emissions.

Accelerated science for materials
Many scientists are working on new materials capable of storing or harnessing energy from variable natural resources more efficiently. For instance, solar fuels are synthetic fuels produced from sunlight or solar heat; they can capture solar energy when the sun is up and store it for later use. However, developing new materials can be slow and imprecise. There are times when human experts do not fully understand the physics behind these materials and have to manually apply heuristics to work out a proposed material's physical properties. ML techniques can help automate this process by combining "heuristics with experimental data, physics, and reasoning to apply and even extend existing physical knowledge."

Controllable sources

Controllable sources can be turned on and off; for instance, nuclear or geothermal plants.

Nuclear power plants
Nuclear power plants are important for meeting climate change goals, but they pose some significant challenges, including public safety, waste disposal, slow technological learning, and high costs. ML, specifically deep networks, can be used to reduce maintenance costs. They can speed up inspections by detecting cracks and anomalies in image and video data, or preemptively detect faults from high-dimensional sensor and simulation data.

Nuclear fusion reactors
Nuclear fusion reactors are capable of producing safe and carbon-free electricity from a virtually limitless hydrogen fuel supply. But right now they consume more energy than they produce, and a lot of scientific and engineering research is still needed before fusion reactors can be put into practical use. ML can accelerate this research by guiding experimental design and monitoring physical processes. As nuclear fusion reactors have a large number of tunable parameters, ML can help prioritize which parameter configurations should be explored during physical experiments.

Reducing the current electricity system's climate impacts

Reducing life-cycle fossil fuel emissions
While we work towards bringing low-carbon electricity systems to society, it is important to reduce emissions from current fossil fuel power generation. ML can be used to prevent the leakage of methane from natural gas pipelines and compressor stations. People have previously used sensor and satellite data to proactively suggest pipeline maintenance or detect existing leaks; ML can improve and scale these existing solutions.

Reducing system waste
As electricity is supplied to consumers, some of it is lost as resistive heat on electricity lines. While these losses cannot be eliminated completely, they can be significantly mitigated to reduce waste and emissions. ML can help prevent avoidable losses through predictive maintenance, by suggesting proactive electricity grid upgrades.

To know more about how machine learning can help reduce the impact of climate change, check out the report.

Read next:
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Now there's a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more

Google finally ends forced arbitration for all its employees

Natasha Mathur
22 Feb 2019
4 min read
Google announced yesterday that it is ending forced arbitration for its full-time employees, as well as for temps, vendors, and contractors (TVCs), in cases of harassment, discrimination, or wrongful termination. The changes will go into effect starting March 21, and employees will be able to litigate their past claims. Moreover, Google has also lifted the ban on class action lawsuits for employees, reports WIRED.
https://twitter.com/GoogleWalkout/status/1098692468432867328

In the case of contractors, Google has removed forced arbitration from the contracts of those who work directly with the firm. Outside firms employing contractors are not required to follow suit, but Google will notify them and ask them to consider the approach and see if it works for them.

Although this is very good news, the group called 'Googlers for ending forced arbitration' published a post on Medium stating that the "fight is not over". They have planned a meeting with legislators in Washington, D.C. next week, where six members of the group will advocate for an end to forced arbitration for all workers. "We will stand with Senators and House Representatives to introduce multiple bills that end the practice of forced arbitration across all employers. We're calling on Congress to make this a law to protect everyone", states the group.
https://twitter.com/endforcedarb/status/1098697243517960194

It was back in November when 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment encountered at Google's workplace. Google had waived forced arbitration for sexual harassment and assault claims in response to the walkout (a move that was soon followed by Facebook), but employees were not convinced: the forced arbitration policy still applied to contractors, temps, and vendors, and was still in effect for other forms of discrimination within the firm. This was soon followed by Google contractors writing an open letter on Medium to Sundar Pichai, CEO of Google, in December, demanding that he address their demands for better conditions and equal benefits for contractors. Googlers also launched an industry-wide awareness campaign against forced arbitration last month, sharing information about arbitration on their Twitter and Instagram accounts throughout the day. The employees wrote in a post on Medium that there were "no meaningful gains for worker equity … nor any actual change in employee contracts or future offer letters".

The pressure on Google for more transparency around its sexual assault policies had been building for quite a while. For instance, shareholders, including James Martin and two pension funds, sued Alphabet's board members last month for protecting top execs accused of sexual harassment; the lawsuit urged more clarity around Google's policies. Similarly, Liz Fong-Jones, developer advocate at Google Cloud Platform, revealed earlier last month that she was leaving Google due to its lack of leadership on the demands made by employees during the Google walkout. She also published a post on Medium last week in which she talked about the 'grave concerns' she had about strategic decisions made at Google.

"We commend the company in taking this step so that all its workers can access their civil rights through public court. We will officially celebrate when we see these changes reflected in our policy websites and/or employment agreements", states the end forced arbitration group.

Public reaction to the news is largely positive, with people cheering on Google employees for the victory:
https://twitter.com/VidaVakil/status/1098773099531493376
https://twitter.com/jas_mint/status/1098723571948347392
https://twitter.com/teamcoworker/status/1098697515858182144
https://twitter.com/PipelineParity/status/1098721912111464450

Read next:
Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus


As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing

Sugandha Lahoti
20 May 2019
4 min read
Update: On Wednesday, according to a leaked memo received by the BBC, UK-based chip designer ARM told staff it must suspend business with Huawei. BT Group Plc also won't offer phones from Huawei when it launches Britain's first 5G mobile network next week, and a number of wireless operators are ditching Huawei's handsets.

On Monday, the U.S. Commerce Department granted a 90-day license for mobile phone companies and internet broadband providers to work with Huawei, allowing Google to send software updates to Huawei phones that use its Android operating system until August 19. As of 20 May, the U.S. government has temporarily eased some trade restrictions on Huawei to help the company's customers around the world: the Commerce Department will allow Huawei Technologies to purchase American-made goods in order to maintain existing networks and provide software updates to existing Huawei handsets.

According to a report by Reuters, Google has suspended all business with Huawei that requires the transfer of hardware, software, and technical services. Huawei will be cut off from updates to Google's Android operating system and will only be able to use the public version of Android, known as the Android Open Source Project (AOSP); it will have to create its own update mechanism for security patches. Future versions of Huawei smartphones running Android will also lose access to popular services, including the Google Play Store and the Gmail and YouTube apps, said Reuters. However, the impact is expected to be minimal in the Chinese market, considering most Google mobile apps are already banned in China. This also means that alternatives offered by domestic competitors such as Tencent and Baidu may see a rise in popularity.
https://twitter.com/asymco/status/1130397070181916672

Holders of current Huawei smartphones with Google apps, however, will continue to be able to use and download app updates provided by Google, a Google spokesperson told Reuters, adding, "We are complying with the order and reviewing the implications. For users of our services, Google Play and the security protections from Google Play Protect will continue to function on existing Huawei devices."
https://twitter.com/Android/status/1130313848332988421

Per a Bloomberg report, chipmakers including Intel, Qualcomm, Xilinx, and Broadcom have told their employees they will not supply Huawei until further notice. This could also disrupt the businesses of American chip giants and slow the rollout of critical 5G wireless networks worldwide, including in China. Last week the FCC voted unanimously to deny China Mobile's bid to provide US telecommunications services.

Huawei's suspension by Google comes after the Trump administration added the Chinese telecom giant to its trade blacklist last week. The Commerce Department said that adding Huawei Technologies and its 70 affiliates to the list bans the company from acquiring components and technology from US firms without government approval. President Donald Trump took this decision to "prevent American technology from being used by foreign-owned entities in ways that potentially undermine US national security or foreign policy interests", said US Commerce Secretary Wilbur Ross in a statement. The order signed by the President did not specify any country or company, but US officials have previously labeled Huawei a "threat" and actively lobbied allies not to use Huawei network equipment in next-generation 5G networks.

Huawei's ban was not received well by the public, especially those with Huawei devices. This is a lose-lose situation for both companies: short term, it hurts Huawei; long term, it hurts Android. The news of the US ban did not sit well with Chinese citizens either. Per a report by BuzzFeed, people in China are calling for a boycott of Apple products. (In February, Huawei was accused of stealing Apple's trade secrets.) Many people took to Weibo, China's popular social media platform, to speak out against Apple. "The functions in Huawei are comparable to Apple iPhones or even better. We have such a good smartphone alternative, why are we still using Apple?" commented one user. "I think Huawei's branding is amazing, it chops an apple into eight pieces," said another post, describing the company's spliced red logo. On Twitter, people openly criticized Google's move as well as the US ban.
https://twitter.com/FearbySoftware/status/1130234526137966592
https://twitter.com/iainthomson/status/1130232015276535808

The U.S.-China tech cold war has escalated into a messy trade war. China now faces added pressure to build its own smartphone operating system, design its own chips, develop its own semiconductor technology, and implement its own technology standards.
https://twitter.com/tomwarren/status/1130229043272531968

Read next:
US blacklists China's telecom giant Huawei over threat to national security
Elite US universities including MIT and Stanford break off partnerships with Huawei
China's Huawei Technologies accused of stealing Apple's trade secrets, reports The Information


Mozilla CEO Chris Beard to step down by the end of 2019 after five years in the role

Bhagyashree R
30 Aug 2019
3 min read
Yesterday, Chris Beard, the CEO of Mozilla, announced that he will be stepping down from his role by the end of this year. After a tenure of more than fifteen years at Mozilla, Beard's immediate plan is to take a break and spend more time with his family.
https://twitter.com/cbeard/status/1167091991487729664

Chris Beard's journey at Mozilla started back in 2004, just before Firefox 1.0 was released. Since then he has been deeply involved in almost every part of the business, including product, marketing, innovation, communications, community, and user engagement. In 2013, Beard worked as an Executive-in-Residence at the venture capital firm Greylock Partners, gaining a deeper perspective on innovation and entrepreneurship; during this time he remained an advisor to Mozilla's Chair, Mitchell Baker.

Chris Beard's appointment as CEO came during a very "tumultuous time" for Mozilla. In 2013, when Gary Kovacs stepped down as CEO, the company searched extensively for a replacement. In March 2014, it appointed its CTO, Brendan Eich, the creator of JavaScript, as CEO. Just a few weeks into the role, Eich had to resign after it was revealed that he had donated $1,000 to California Proposition 8, which called for the banning of same-sex marriage in California. In April 2014, Chris Beard was appointed interim CEO at Mozilla and was confirmed in the position on July 28.

Throughout his tenure as a "Mozillian", Chris Beard has made countless contributions to the company. Listing his achievements, Mozilla's Chair, Mitchell Baker, wrote in a post of thanks, "This includes reinvigorating our flagship web browser Firefox to be once again a best-in-class product. It includes recharging our focus on meeting the online security and privacy needs facing people today. And it includes expanding our product offerings beyond the browser to include a suite of privacy and security-focused products and services from Facebook Container and Enhanced Tracking Protection to Firefox Monitor."

Read also: Firefox now comes with a Facebook Container extension to prevent Facebook from tracking users' web activity

Mozilla is now seeking a successor for Beard to lead the company. Mitchell Baker has agreed to step into an interim CEO role if the search continues beyond this year. Meanwhile, Chris Beard will continue to be an advisor to the board of directors and to Baker. "And I will stay engaged for the long-term as an advisor to both Mitchell and the Board, as I've done before," he wrote.

Many of Beard's co-workers thanked him for his contributions to Mozilla:
https://twitter.com/kaykas/status/1167094792230076424
https://twitter.com/digitarald/status/1167107776734085120
https://twitter.com/hoosteeno/status/1167099338226429952

You can read Beard's announcement on Mozilla's blog.

What's new in web development this week:
Mozilla proposes WebAssembly Interface Types to enable language interoperability
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
Mozilla's MDN Web Docs gets new React-powered frontend, which is now in Beta


Puppet's 2019 State of DevOps Report highlights that integrating security into DevOps practices leads to better business outcomes

Amrata Joshi
27 Sep 2019
5 min read
On Wednesday, Puppet announced the findings of its eighth annual State of DevOps Report. The report reveals practices and patterns that can help organisations integrate security into the software development lifecycle. According to the report, 22% of the firms at the highest level of security integration have reached an advanced stage of DevOps maturity, compared with just 6% of firms with no security integration. Looking at firms with an overall 'significant to full' level of integration, Europe is ahead of the Asia-Pacific region and the US, at 43% compared with 38% or less.

Alanna Brown, Senior Director of Community and Developer Relations at Puppet and author of the State of DevOps report, said, "The DevOps principles that drive positive outcomes for software development — culture, automation, measurement and sharing — are the same principles that drive positive security outcomes. Organisations that are serious about improving their security practices and posture should start by adopting DevOps practices." Brown added, "This year's report affirms our belief that organisations who ignore or deprioritise DevOps, are the same companies who have the lowest level of security integration and who will be hit the hardest in the case of a breach."

Key findings of the 2019 State of DevOps Report

Firms at the highest level of security integration can deploy to production on demand at a higher rate than firms at all other levels of integration: 61% of them can do so, compared with less than half (49%) of organisations that have not integrated security at all.
82% of survey respondents at firms with the highest level of security integration believe their security policies and practices meaningfully improve their firm's security posture; among respondents at firms without security integration, only 38% share that level of confidence.
Firms that integrate security throughout the delivery lifecycle are more than twice as likely to stop a push to production for a medium-severity security vulnerability.
In the middle stages of evolving security integration, delivery and security teams experience higher friction when collaborating: software delivery slows down and audit issues increase. Friction is higher for respondents who work in security jobs than for those in non-security jobs, but teams that keep working through this stage see the results of their hard work sooner.

On remediation time, just 7% of respondents can remediate a critical vulnerability in less than an hour, 32% can do so in one hour to less than one day, and 33% in one day to less than one week.

Michael Stahnke, VP of Platform Engineering at CircleCI, said, "It shouldn't be a surprise to anyone that integrating security into the software delivery lifecycle requires intentional effort and deep collaboration across teams." Stahnke added, "What did surprise me, however, was that the practices that promote cross-team collaboration had the biggest impact on the teams' confidence in the organisation's security posture. Turns out, empathy and trust aren't automatable."

Factors in whether an organisational structure is DevOps-ready

The flexibility of the current organisational structure.
The organisational culture.
How isolated the different functions are.
The skillsets of your team.
The relationship between team leaders and teams.

Best practices for improving security posture

Development and security teams collaborate on threat models.
Security tools are integrated into the development integration pipeline, so that engineers can be confident they are not introducing known security problems into their codebases.
Security requirements, both functional and non-functional, are prioritised as part of the product backlog.
Security experts evaluate automated tests and review changes in high-risk areas of the code, such as cryptography and authentication systems.
Infrastructure-related security policies are reviewed before deployment.

Andrew Plato, CEO of Anitian, said, "Puppet's State of DevOps report provides outstanding insights into the ongoing challenges of integrating security and DevOps teams." Plato added, "While the report outlines many problems, it also highlights the gains that arise when DevOps and security are fully integrated. These benefits include increased security effectiveness, more robust risk management, and tighter alignment of business and security goals. These insights mirror our experiences at Anitian implementing our security automation platform. We are proud to be a sponsor of the State of DevOps report as well as a technology partner with Puppet. We anticipate referencing this report regularly in our engagement with our customers as well as the DevOps and security communities."

To summarize, organisations focused on improving their security posture and practices should adopt DevOps practices, just as the organisations at the highest levels of DevOps adoption have fully integrated security practices. Check out the complete 2019 State of DevOps Report here.

Other interesting news in cloud & networking:
GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Amrata Joshi
27 Aug 2019
3 min read
Last week, the team at Red Hat, a provider of enterprise open source solutions, announced the general availability of Red Hat OpenShift Service Mesh for connecting, managing, observing, and simplifying service-to-service communication of Kubernetes applications on Red Hat OpenShift 4. OpenShift Service Mesh is based on the Istio, Kiali, and Jaeger projects and is designed to deliver an end-to-end developer experience around microservices-based application architectures. It manages the network connections between containerized and decentralized applications, and eases the complex task of implementing bespoke networking services for applications and business logic.

Larry Carvalho, research director at IDC, said in a statement to Business Wire, "Service mesh is the next big area of disruption for containers in the enterprise because of the complexity and scale of managing interactions with interconnected microservices. Developers seeking to leverage Service Mesh to accelerate refactoring applications using microservices will find Red Hat's experience in hybrid cloud and Kubernetes a reliable partner with the Service Mesh solution."

Developers can now improve the implementation of microservice architectures through a service mesh natively integrated into the OpenShift Kubernetes platform. OpenShift Service Mesh improves traffic management and includes service observability and visualization of the mesh topology.

Ashesh Badani, Red Hat's senior VP of Cloud Platforms, said in a statement, "The addition of Red Hat OpenShift Service Mesh allows us to further enable developers to be more productive on the industry's most comprehensive enterprise Kubernetes platform by helping to remove the burdens of network connectivity and management from their jobs and allowing them to focus on building the next-generation of business applications."

Features of Red Hat OpenShift Service Mesh

Tracing
OpenShift Service Mesh features tracing based on Jaeger, an open, distributed tracing system. Tracing helps developers track a request as it passes between services, providing insight into the request process from start to finish. (A small standalone example of emitting a Jaeger span appears at the end of this article.)

Visualization and observability
The Service Mesh also provides an easier way to view its topology and observe how services interact. Visualization helps in understanding how services are managed and how traffic is flowing in near-real time, which makes management and troubleshooting easier.

Service Mesh installation and configuration
OpenShift Service Mesh offers "one-click" installation and configuration via a Service Mesh Operator and the Operator Lifecycle Management framework, so developers can deploy applications into a service mesh more easily. The Service Mesh Operator deploys Istio, Jaeger, and Kiali together, minimizing the management burden and automating tasks such as installation, service maintenance, and lifecycle management.

Developed with open projects
OpenShift Service Mesh is built on open projects, in collaboration with leading members of the Kubernetes community.

Increased developer productivity
The Service Mesh integrates communication policies without requiring changes to application code or language-specific libraries.

To know more about Red Hat OpenShift Service Mesh, check out the official website.
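In the mesh itself, spans are produced by the Envoy sidecars rather than by application code, but a standalone sketch can show what a Jaeger span looks like. The snippet below is an illustration only, using the jaeger-client Python package and assuming a locally running Jaeger agent; the service and tag names are hypothetical, and this is not OpenShift Service Mesh's own configuration.

```python
import time
from jaeger_client import Config  # assumes the jaeger-client package is installed

# Initialize a tracer that reports spans to a local Jaeger agent (illustration only).
config = Config(
    config={"sampler": {"type": "const", "param": 1}, "logging": True},
    service_name="checkout-demo",  # hypothetical service name
    validate=True,
)
tracer = config.initialize_tracer()

# Record one unit of work as a span; nested spans would model downstream calls.
with tracer.start_span("charge-credit-card") as span:
    span.set_tag("order.id", "12345")
    time.sleep(0.1)  # stand-in for real work

time.sleep(2)   # give the reporter time to flush the span
tracer.close()
```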
Read next:
Red Hat joins the RISC-V foundation as a Silver level member
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Red Hat rebrands logo after 20 years; drops Shadowman

Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look

Sugandha Lahoti
24 Aug 2018
2 min read
The Bodhi team have announced the fifth major release of their Linux distribution. Bodhi Linux 5.0.0 comes with an updated Ubuntu 18.04 core and an overall modern look for its Moksha window manager. Bodhi Linux first shipped as a stable release seven years ago, as a lightweight Linux distribution based on Ubuntu and the Moksha window manager. It uses a minimal base system, allowing users to populate it with the software of their choice.

Bodhi Linux 5.0.0 features disc images with a fresh new look: a modified version of the popular 'Arc Dark' theme colorized in Bodhi green. The team have also included a fresh default wallpaper, login screen, and splash scenes as your system boots.

Screenshots: Bodhi Linux default desktop (busy); Bodhi Linux desktop (clean).

The Bodhi team have not provided a change log, because the move from an Ubuntu 16.04 base to 18.04 is the only major difference. Ubuntu 18.04 brings changes such as:

Better metric collection in Ubuntu Report
Support for installing on NVMe with RAID1
A fix for a typo that made update-manager report crash
Miscellaneous unattended-upgrade fixes
The Ubuntu welcome tool now mentions dock and notifications
Patches to make audio work on Lenovo machines with dual audio codecs
The New Tab menu item restored in GNOME Terminal
A new "Thunderbolt" panel in the Settings app

If you installed a pre-release of Bodhi 5.0.0, you simply need to run your system updates to be current with the latest ISO images; note that system updates will not adjust the look of your desktop automatically. If you have a previous Bodhi release installed, you will need to do a clean install to upgrade to Bodhi 5.0.0. Bodhi 4.5.0 will be supported until Ubuntu 16.04 reaches end of life in April 2021.

You can read more about the Bodhi 5.0.0 release on the Bodhi Linux blog.

Read next:
What to expect from the upcoming Ubuntu 18.04 release
Is Linux hard to learn?
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available

Google introduces Cloud HSM beta hardware security module for crypto key security

Prasad Ramesh
23 Aug 2018
2 min read
Google has rolled out a beta of Cloud HSM, a cloud-hosted hardware security module aimed at protecting cryptographic keys in hardware. Cloud HSM gives customers stronger key security without the operational overhead of running their own HSMs.

Cloud HSM is a cloud-hosted hardware security module in which customers can store encryption keys. It meets FIPS 140-2 Level 3, part of the Federal Information Processing Standards, a U.S. government security standard for cryptographic modules in non-military use; this level of certification is commonly required in financial and healthcare institutions. The HSM is a specialized hardware component designed to encrypt small data blocks, in contrast to the larger blocks managed with the Key Management Service (KMS). It is available now and is fully managed by Google, meaning all patching, scaling, cluster management, and upgrades are handled automatically with no downtime. The customer retains full control of the Cloud HSM service via the Cloud KMS APIs.

Il-Sung Lee, Product Manager at Google, stated: "And because the Cloud HSM service is tightly integrated with Cloud KMS, you can now protect your data in customer-managed encryption key-enabled services, such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc, with a hardware-protected key."

In addition to Cloud HSM, Google has also released betas of asymmetric key support for both Cloud KMS and Cloud HSM. Users can now create a variety of asymmetric keys for decryption or signing operations, which means keys used for PKI or code signing can be stored in a Google Cloud managed keystore. "Specifically, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 keys will be available for signing operations, while RSA 2048, RSA 3072, and RSA 4096 keys will also have the ability to decrypt blocks of data."

For more information visit the Google Cloud blog, and for HSM pricing visit the Cloud HSM page.
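As a hedged sketch of how a key managed through Cloud KMS (and, with an HSM-backed key, protected by Cloud HSM) might be used from application code, the snippet below encrypts a small payload with the google-cloud-kms Python client. The project, location, key ring, and key names are placeholders, and the call shape assumes the 2.x-style client API; it is not taken from Google's announcement.

```python
from google.cloud import kms  # assumes the google-cloud-kms client library is installed

def encrypt_secret(project_id: str, location: str, key_ring: str, key: str, plaintext: bytes) -> bytes:
    """Encrypt a small payload with a (possibly HSM-backed) Cloud KMS key."""
    client = kms.KeyManagementServiceClient()
    key_name = client.crypto_key_path(project_id, location, key_ring, key)
    response = client.encrypt(request={"name": key_name, "plaintext": plaintext})
    return response.ciphertext

if __name__ == "__main__":
    # Placeholder resource names -- substitute your own project and key.
    ciphertext = encrypt_secret("my-project", "us-east1", "my-key-ring", "my-hsm-key", b"super secret")
    print(len(ciphertext), "bytes of ciphertext")
```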
Read next:
Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Machine learning APIs for Google Cloud Platform
Top 5 cloud security threats to look out for in 2018

Eclipse IDE’s Photon release will support Rust

Pavan Ramchandani
29 Jun 2018
2 min read
The Eclipse Foundation has announced the Photon release of the Eclipse IDE, and with it, support for the Rust language. This support gives Rust developers a native Eclipse IDE working experience and extends the IDE and learning support Eclipse provides to the Rust community. Photon marks the thirteenth annual simultaneous release of Eclipse.

The important features in the Photon release are as follows:

Full Eclipse IDE support for building, debugging, running, and packaging Rust applications, giving a good user experience for Rust development.
Expanded C# support for editing and debugging code, including syntax coloring, autocomplete suggestions, diagnostics, and navigation.
New frameworks added to the IDE, such as RedDeer (a framework for building automated tests), Yasson (a Java framework for binding with JSON documents), and JGit (Git for Java), among others.
Further updates and features for the dynamic language toolkit, the Eclipse Modeling Framework (EMF), PHP development tools, C/C++ development tools, tools for Cloud Foundry, a dark theme, and improvements to background colors and popup dialogs.

The Eclipse Foundation has also introduced Language Server Protocol (LSP) based tooling with the Photon release. With LSP-based support, Eclipse can deliver support for popular and emerging languages in the IDE, and within the normal release cycle future releases will focus on keeping pace with emerging tools and technologies and on developers and their commercial needs.

For more information on the Photon project and contributing to the Eclipse community, you can check out the Eclipse Meetup event.

Read more:
What can you expect from the upcoming Java 11 JDK?
Perform Advanced Programming with Rust
The top 5 reasons why Node.js could topple Java

Facebook retires its open source contribution to Nuclide, Atom IDE, and other associated repos

Bhagyashree R
13 Dec 2018
3 min read
Yesterday, the Facebook Open Source team announced that it will no longer contribute to the open source development of the Nuclide extension, Atom IDE, and other associated repos.
https://twitter.com/fbOpenSource/status/1072928679695548416

Nuclide is a code editor built as a suite of features on top of the Atom text editor, offering hackability and the support of an active community. Facebook developed Nuclide to provide a first-class unified development environment for React Native, Hack, and Flow projects. It was first created for Facebook's internal engineers and later open sourced in the hope that others could benefit from it too.

In its announcement, Facebook said the decision was made because it could not give the project enough attention: "However, our team has not been able to give this project the amount of attention and responsiveness it deserves and as a result, we've made the difficult decision to retire Nuclide and associated repos, such as the Atom-IDE packages."

Though it will no longer contribute to the Nuclide open source project, Facebook will continue to use it internally:
https://twitter.com/amasad/status/1072930703065501696

The latest release, Nuclide 0.366, will be the last release by Facebook. Its source code is available in the Facebook Open Source Archive. The language and debugging services will still be supported in Atom and other compatible IDEs, such as Microsoft Visual Studio Code or the clients listed on Langserver.org.

Users on Hacker News are speculating that this may be the time to adopt VS Code, chiefly because of its good integration with TypeScript. As one user put it, "A shame, in an ideal world there would be the benefit of outside contributions that made less internal work needed, so overall would be a win for Facebook. But probably this is related to Atom itself being taken over by VSCode, the number of users (and maybe contributors) appears to be going down."

Read the official announcement by Facebook on Nuclide's website.

Read next:
Facebook's artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
Facebook AI research and NYU school of medicine announces new open-source AI models and MRI dataset as part of their FastMRI project
Facebook plans to change its algorithm to demote "borderline content" that promotes misinformation, and hate speech on the platform