
Tech News


Haskell is moving to GitLab due to issues with Phabricator

Prasad Ramesh
03 Dec 2018
3 min read
The Haskell functional programming language is moving from Phabricator to GitLab. Last Saturday, Haskell consultant Ben Gamari shared details about the move in a mail to the community.

It started with a proposal to move to GitLab

A few weeks back, Gamari wrote to the Haskell mailing list about moving the Glasgow Haskell Compiler (GHC) development infrastructure to GitLab. The original proposal wasn't complete enough to be used, but it did provide a small test instance to experiment on. The staging instance at https://gitlab.staging.haskell.org is now ready to use. While this is not the final version of the migration, it has most of the features a user would expect:

- Trac tickets are fully imported, including attachments
- Continuous integration (CI) is available via CircleCI
- Mirrors of all boot libraries are present
- Users can also log in with their GitHub credentials if they choose to

Issues in the migration

There are also a few issues listed by Gamari that need to be worked on:

- Timestamps associated with ticket open and close events aren't accurate
- Some milestone changes have problems being imported
- Currently, CircleCI fails when forked
- Trac wiki pages aren't imported as of now

Gamari said that the listed issues have either been resolved in the import tool or are in the process of being resolved. The goal of the staging instance is to let contributors gain experience using GitLab and identify any obstacles to the eventual migration. Developers should note that any comments, merge requests, or issues created on the temporary instance may not be preserved. The focus is on identifying workflows that will become harder under GitLab and ways to improve them, pending issues in importing Trac, and areas that lack documentation.

Why the move to GitLab?

GHC did not choose GitHub. As Gamari stated in another mail: "Its feature set is simply insufficient enough to handle the needs of a larger project like GHC". The move to GitLab is due to a number of reasons:

- Phacility, the company that owns Phabricator, has now closed support to non-paying customers
- As Phacility now focuses on paying customers, the open-source parts used by GHC seem half finished
- The Phabricator tool Harbormaster keeps breaking CI
- Surveys indicated developers leaning towards Git rather than Arcanist, the PHP tool used by Phabricator

The final migration will happen in about two weeks, with December 18 mentioned as the date. For more details, you can follow the Haskell mailing list.

What makes functional programming a viable choice for artificial intelligence projects?
GitLab open sources its Web IDE in GitLab 10.7
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub


Flutter challenges Electron, soon to release a desktop client to accelerate mobile development

Bhagyashree R
03 Dec 2018
3 min read
On Saturday, the Flutter team announced that, in competition with Electron, they will soon be releasing a native desktop client to accelerate mobile development. The Flutter native desktop client will come with support for resizing the emulator at runtime, using assets from your PC, better RAM usage, and more.

Flutter is Google's open source mobile app SDK, which enables developers to write once and deploy natively to different platforms such as Android, iOS, Windows, Mac, and Linux. Additionally, they can also share the business logic to the web using AngularDart. Here's what Flutter for desktop brings in:

Resizable emulator during runtime

To check how your layout looks on different screen sizes, you need to create different emulators, which is quite cumbersome. To solve this issue, the Flutter desktop client will provide a resizable emulator.

Use assets saved on your PC

When working with apps that interact with assets on the phone, developers have to first move all the testing files to the emulator or the device. With the desktop client, you can simply pick the file you want with your native file picker. Additionally, you don't have to make any changes to the code, as the desktop implementation uses the same method as the mobile implementation.

Hot reload and debugging

Hot reload and debugging allow quick experimenting, building UIs, adding new features, and fixing bugs. The desktop client supports these capabilities for better productivity.

Better RAM usage

The Android emulator consumes up to 1 GB of RAM, and the RAM usage becomes worse when you are also running IntelliJ and the ever-RAM-hungry Chrome. Since the embedder runs natively, there is no need for the Android emulator at all.

Universally usable widgets

You will be able to use most of the widgets you create, such as buttons and loading indicators, universally. Widgets that require a different look per platform can be encapsulated fairly easily by checking the TargetPlatform property.

Pages and plugins

Pages differ in layout, depending on the platform and screen sizes, but not in functionality. You will be able to easily create accurate layouts for each platform with the PageLayoutWidget. With regard to plugins, you do not have to make any changes to the Flutter code when using a plugin that also supports the desktop embedder.

The Flutter desktop client is still in alpha, which means there will be more changes in the future. Read the official announcement on Medium.

Google Flutter moves out of beta with release preview 1
Google Dart 2.1 released with improved performance and usability
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript


An update on Bcachefs, the “next generation Linux filesystem”

Melisha Dsouza
03 Dec 2018
3 min read
Kent Overstreet announced Bcachefs as “the COW filesystem for Linux that won't eat your data" in 2015. Since then, the filesystem has undergone numerous updates and patches to get to where it is today. On the 1st of December, Overstreet published an update on the problems and improvements that are currently being worked on in Bcachefs.

Status update on Bcachefs

Since the last update, Overstreet has focused on two major problem areas: atomicity of filesystem operations and non-persistence of allocation information (per-bucket sector counts). Filesystem operations that had anything to do with i_nlink were not atomic, so on startup the system would have to scan and recalculate i_nlink and also delete inodes that were no longer referenced. Also, because allocation information was not persisted, the system would have to recalculate all the disk space accounting on startup.

The team has now been able to make everything fully atomic except for fallocate/fcollapse/etc. After an unclean shutdown, the only thing left to do is scan the inodes btree for inodes that have been deleted.

Erasure coding is about 80% done in Bcachefs. Overstreet is now focused on persistent allocation information. This will then allow him to work on reflink, which in turn will be useful to the company that's funding bcachefs development. The reflinked extent refcounts will be much too big to keep in memory and hence will have to be kept in a btree and updated whenever doing extent updates; the infrastructure needed to make that happen also depends on making disk space accounting persistent.

After all of these updates, he claims bcachefs will have fast mounts (including after an unclean shutdown). He is also working on some improvements to disk space accounting for multi-device filesystems, which will lead up to fast mounts after clean shutdowns. To know whether a user can safely mount in degraded mode, the filesystem will have to store a list of all the combinations of disks that have data replicated across them (or are in an erasure coded stripe), without any kind of fixed layout like regular RAID uses.

Why should you choose Bcachefs?

Overstreet announced that Bcachefs is stable, fast, and has a small and clean code base, along with the features necessary for a modern Linux filesystem. It has a long list of features, completed or in progress:

- Copy on write (COW), like zfs or btrfs
- Full data and metadata checksumming
- Caching
- Compression
- Encryption
- Snapshots
- Scalability

Bcachefs prioritizes robustness and reliability

According to Kent, Bcachefs ensures that customers won't lose their data. Bcachefs is an extension of bcache, which was designed as a caching layer to improve block I/O performance. It uses a solid-state drive as a cache for a (slower, larger) underlying storage device. Mainline bcache is not a typical filesystem; it looks like a special kind of block device. It handles the movement of blocks of data between fast and slow storage, ensuring that the most frequently used data is kept on the faster device. bcache manages data in a way that yields high performance while ensuring that no data is ever lost, even when an unclean shutdown takes place.

You can head over to LKML.org for more information on this announcement.

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project


Microsoft wins $480 million US Army contract for HoloLens

Natasha Mathur
30 Nov 2018
3 min read
Microsoft won a $480 million contract earlier this week to develop and supply prototypes of augmented reality systems for use in combat and military training by the US Army. The project, the Integrated Visual Augmentation System (IVAS), formerly identified as Heads Up Display (HUD) 3.0, aims to rapidly develop, test, and manufacture a single platform that soldiers can use to fight, rehearse, and train. It is also meant to offer increased lethality, mobility, and situational awareness.

The system would provide remote viewing of weapon sights to enable low-risk, rapid target acquisition, and will integrate both thermal and night vision cameras. Moreover, it will be capable of tracking a soldier's heart and breathing rates and detecting concussions. Under the contract, the military will order an initial run of 2,550 prototypes and could eventually buy more than 100,000 of the devices.

As per the FBO (Federal Business Opportunities) notice, the Close Combat Force suffers the highest casualty rate in combat. Current and future battles are going to be fought in urban and subterranean environments where current capabilities are not sufficient. IVAS is meant to address this by providing increased sets and repetitions in complex environments that make use of its STE Squad capability integrated with HUD 3.0. “Soldier lethality will be vastly improved through cognitive training and advanced sensors, enabling squads to be first to detect, decide, and engage,” reads the white paper. HUD 3.0 will offer integration of head, body, and weapon and provide significant enhancement of detection, targeting, engagements, and AI to match the speed of war.

The STE Squad capability provides global terrain replication of operational environments for close combat training and rehearsals before soldiers actually engage in such an environment. Generally, dismounted training relies on computer and projector screens that severely restrict soldier movement. The new STE Squad capability brings together the live and virtual environments, thereby developing an enhanced live training capability using the operationally worn HUD 3.0.

The U.S. Army and the Israeli military have already been customers of Microsoft's HoloLens devices, which they have used in training. With this contract, the Army becomes one of Microsoft's top and most important HoloLens customers. Magic Leap was also trying to win the contract, which would have been part of a $500+ million Army program.

“Augmented reality technology will provide troops with more and better information to make decisions. This new work extends our longstanding, trusted relationship with the Department of Defense to this new area,” mentioned a Microsoft spokesman in an email.

For more information, check out the official IVAS white paper.

Microsoft's move towards ads on the Mail App in Windows 10 sparks privacy concerns
Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots
Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool


Amazon Rekognition faces more scrutiny from Democrats and German antitrust probe

Melisha Dsouza
30 Nov 2018
4 min read
On Thursday, a group of seven House Democrats sent a letter to Amazon CEO Jeff Bezos demanding further details about AWS Rekognition, Amazon's controversial facial recognition technology. This is the third such letter, and like those before it, it raises concerns and questions about Rekognition's accuracy and the possible effects it might have on citizens.

The very first letter was sent in late July. Amazon responded in August with a diplomatic yet unsatisfactory letter of its own that failed to provide much detail. A second letter was then sent in November. According to the congressmen, the second letter was sent because Amazon “didn't give sufficient answers” in its initial response.

The initial inquiry was timed around an ACLU report that found Rekognition, software the company has sold to law enforcement and pitched for use by Immigration and Customs Enforcement, had incorrectly matched the faces of 28 members of Congress with individuals included in a database of mugshots. Amazon employees had also signed a June letter to senior management, demanding that the company cancel ongoing Rekognition contracts with law enforcement agencies. Rep. Jimmy Gomez told BuzzFeed News in an interview, “If there's a problem with this technology, it could have a profound impact on livelihoods and lives. There are no checks or balances on the tech that's coming out- and this is in the hands of law enforcement.”

Written by Sen. Edward Markey and Reps. Gomez, Luis Gutiérrez, and Ro Khanna, among others, the letter reprimands Amazon for “[failing] to provide sufficient answers” to the previous two letters sent by the House Democrats. It also raises additional concerns based on “newly available information”, specifically BuzzFeed News's investigation into how the Orlando Police Department uses the tech, as well as a report that Amazon had actively marketed the tech to US Immigration and Customs Enforcement.

The House Democrats also wrote in the letter about their concern regarding Rekognition's “accuracy issues”. They write that they “have serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color”. There are also further questions around whether Amazon will build privacy protections into its facial recognition system, and how it will ensure the system is not abused for secret government surveillance.

As first reported by Gizmodo, AWS CEO Andy Jassy first addressed employee concerns at an all-hands meeting earlier this month. At that meeting, he cited the software's Terms of Service as the core safeguard against potential abuses. At Amazon's re:Invent conference, Jassy said, “Even though we haven't had a reported abuse case, we're very aware people will be able to do things with these services that could do harm.” Amazon continues to sell the software regardless. Lawmakers closed the letter with specific questions about the operation and bias of Rekognition, and they have given Amazon a strict reply deadline of December 13. You can head over to BuzzFeed News to read the entire letter.

Germany adds antitrust probe in addition to EU's scrutiny of Amazon

On Thursday, Germany's antitrust agency said that it has begun an investigation of Amazon over complaints that it is abusing its position to the detriment of sellers who use its "marketplace" platform. "Because of the many complaints we have received we will examine whether Amazon is abusing its market position to the detriment of sellers active on its marketplace," said agency head Andreas Mundt. "We will scrutinize its terms of business and practices toward sellers." This investigation adds to the EU's scrutiny of the company's information gathering practices. Amazon's "double role as the largest retailer and largest marketplace has the potential to hinder other sellers on its platform," said the Federal Cartel Office. Amazon said it could not comment on ongoing proceedings but said that "We will cooperate fully with the Bundeskartellamt and continue working hard to support small and medium-sized businesses and help them grow".

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers


Aviatrix introduces Aviatrix Orchestrator to provide powerful orchestration for AWS Transit Network Gateway at re:Invent 2018

Bhagyashree R
30 Nov 2018
2 min read
Yesterday at Amazon re:Invent, Aviatrix, a company whose tools help users manage cloud deployments, announced and demonstrated Aviatrix Orchestrator. This new feature will make connecting multiple networks much easier. Essentially, it unifies the management of both AWS native networking services and Aviatrix services via a single management console.

How does Aviatrix Orchestrator support AWS Transit Gateway?

AWS Transit Gateway helps customers interconnect their virtual private clouds and on-premises networks through a single gateway. Users only need to create and manage a single connection from the central gateway to each Amazon VPC, on-premises data center, or remote office across their network. It basically acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes.

Aviatrix Orchestrator adds an automation layer to AWS Transit Gateway that allows users to provision and implement route domains securely and accurately. Users can automatically configure and propagate segmentation policies and leverage built-in troubleshooting and visualization tools for monitoring the entire environment. Some of the advantages of combining Aviatrix Orchestrator and AWS Transit Gateway include:

- Ensuring your AWS network follows virtual private cloud segmentation best practices
- Limiting lateral movement in the event of a security breach
- Reducing the impact of human error by removing the need for potentially tedious manual configuration
- Minimizing the blast radius that can result from misconfigurations
- Replacing a flat architecture with a transit architecture

Aviatrix Orchestrator is now available as an optional feature of the Aviatrix AVX Controller. New customers can launch the Aviatrix Secure Networking Platform AMI from AWS Marketplace to get access to this functionality. Existing customers can upgrade to the latest version of the AVX software to use this feature. For more detail, visit the Aviatrix website.

cstar: Spotify's Cassandra orchestration tool is now open source!
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
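To make the hub-and-spoke model described above concrete, here is a minimal sketch of creating a Transit Gateway and attaching a single VPC to it with boto3. The VPC and subnet IDs are placeholders, and the sketch shows the raw AWS calls rather than anything Aviatrix-specific.

```python
# Minimal hub-and-spoke sketch with the AWS SDK for Python (boto3).
# The VPC and subnet IDs below are placeholders; Aviatrix Orchestrator automates
# provisioning like this, but the underlying AWS calls look roughly as follows.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the central hub.
tgw = ec2.create_transit_gateway(Description="demo hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one spoke VPC to the hub.
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
print("Attachment state:", attachment["TransitGatewayVpcAttachment"]["State"])
```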

Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!

Melisha Dsouza
30 Nov 2018
4 min read
The second-to-last day of Amazon re:Invent 2018 ended on a high note. AWS announced two new features, Lambda Layers and the Lambda Runtime API, that claim to “make serverless development even easier”. In addition to this, AWS announced that Application Load Balancers will now be able to invoke Lambda functions to serve HTTP(S) requests, and that Lambda gains Ruby language support.

#1 Lambda Layers

Lambda Layers allow developers to centrally manage code and data that is shared across multiple functions. Instead of packaging and deploying this shared code together with all the functions using it, developers can put common components in a ZIP file and upload it as a Lambda Layer. These layers can be used within an AWS account, shared between accounts, or shared publicly with the developer community.

AWS is also publishing a public layer which includes NumPy and SciPy. This layer is prebuilt and optimized to help users carry out data processing and machine learning tasks quickly. Developers can include additional files or data for their functions, including binaries such as FFmpeg or ImageMagick, or dependencies such as NumPy for Python. These layers are added to your function's zip file when published. Layers can also be versioned to manage updates, and each version is immutable. When a version is deleted or its permissions are revoked, a developer won't be able to create new functions using it; however, functions that used it previously will continue to work.

Lambda Layers help keep the function code smaller and more focused on what the application has to build. In addition to faster deployments, because less code must be packaged and uploaded, code dependencies can be reused.

#2 Lambda Runtime API

This is a simple interface to use any programming language, or a specific language version, for developing functions. Runtimes can be shared as layers, which allows developers to work with a programming language of their choice when authoring Lambda functions. Developers using the Runtime API have to bundle it with their application artifact or as a Lambda layer that the application uses. When creating or updating a function, users can select a custom runtime. The function must include (in its code or in a layer) an executable file called bootstrap that is responsible for the communication between the code and the Lambda environment.

As of now, AWS has made the C++ and Rust runtimes available as open source. Other open source runtimes that will likely be available soon include:

- Erlang (Alert Logic)
- Elixir (Alert Logic)
- Cobol (Blu Age)
- Node.js (NodeSource N|Solid)
- PHP (Stackery)

The Runtime API is how AWS will support new languages in Lambda. A notable feature of the C++ runtime is that it offers the simplicity and expressiveness of interpreted languages while maintaining good performance and a low memory footprint. The Rust runtime makes it easy to write highly performant Lambda functions in Rust.

#3 Application Load Balancers can invoke Lambda functions to serve HTTP(S) requests

This new functionality enables users to access serverless applications from any HTTP client, including web browsers. Users can also route requests to different Lambda functions based on the requested content. An Application Load Balancer can be used as a common HTTP endpoint to simplify operations and monitoring for applications that use both servers and serverless computing.

#4 Ruby is now a supported language for AWS Lambda

Developers can write Lambda functions as idiomatic Ruby code and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default, making it easy and quick for functions to interact directly with AWS resources. Ruby on Lambda can be used either through the AWS Management Console or the AWS SAM CLI. This ensures developers benefit from the reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer
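As a concrete illustration of the Layers workflow described under #1, here is a minimal sketch that publishes a layer and attaches it to an existing function with boto3. The layer name, zip file, and function name are hypothetical placeholders, not values from the announcement.

```python
# Minimal sketch: publish a Lambda Layer and attach it to an existing function.
# Layer name, zip file, and function name are hypothetical placeholders.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Publish the shared code (e.g. common helpers or vendored dependencies) as a layer.
with open("shared-deps.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="shared-deps",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.6"],
    )

# Reference the new layer version from a function; its contents are extracted
# under /opt in the function's execution environment.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)
```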


Dell reveals details on its recent security breach

Savia Lobo
30 Nov 2018
2 min read
On Wednesday, Dell announced that it had discovered a security breach on November 9th. The attackers attempted to extract Dell customer information, including names, email addresses, and hashed passwords.

The company said, “Though it is possible some of this information was removed from Dell's network, our investigations found no conclusive evidence that any was extracted. Additionally, Dell cybersecurity measures are in place to limit the impact of any potential exposure.” According to Dell's press release, “Upon detection of the attempted extraction, Dell immediately implemented countermeasures and initiated an investigation. Dell also retained a digital forensics firm to conduct an independent investigation and has engaged law enforcement.”

The company did not go into detail about the hashing algorithm it uses. That matters, because weak algorithms such as MD5 can be cracked within seconds to reveal the plaintext password. “Credit card and other sensitive customer information were not targeted. The incident did not impact any Dell products or services,” the company said.

According to a comment on the Hacker News thread, “Dell ‘hashes’ all Dell.com customer account passwords prior to storing them in our database using a hashing algorithm that has been tested and validated by an expert third-party firm. This security measure limits the risk of customers’ passwords being revealed if a hashed version of their password were to ever be taken.”

According to ZDNet, Dell said it is still investigating the incident, but said the breach wasn't extensive, with the company's engineers detecting the intrusion on the same day it happened. A Dell spokesperson declined to give out a number of affected accounts, saying "it would be imprudent to publish potential numbers when there may be none."

While resetting passwords is the safest first step, users should also keep an eye on their card statements if they have saved any financial or legal information in their accounts.

European Consumer groups accuse Google of tracking its users’ location, calls it a breach of GDPR
A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users’ private data up for sale, reports BBC News
Cathay Pacific, a major Hong Kong based airlines, suffer data breach affecting 9.4 million passengers
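To illustrate why the choice of hashing algorithm matters, the sketch below contrasts a fast, unsalted MD5 hash with a salted, iterated PBKDF2 hash using only Python's standard library. It is a generic example and says nothing about Dell's actual implementation.

```python
# Generic illustration of weak vs. stronger password hashing.
# This is not Dell's implementation; it only shows why unsalted MD5 is easy to crack.
import hashlib
import os

password = b"correct horse battery staple"

# Unsalted MD5: identical passwords always produce the same digest, and the hash
# is fast enough that attackers can test billions of guesses offline.
weak = hashlib.md5(password).hexdigest()

# Salted, iterated PBKDF2: a random salt defeats precomputed tables, and the
# iteration count makes each guess expensive for an attacker.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print("MD5:   ", weak)
print("PBKDF2:", strong.hex())
```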


Sennheiser opens up about its major blunder that let hackers easily carry out man-in-the-middle attacks

Amrata Joshi
30 Nov 2018
4 min read
Yesterday, Sennheiser, an audio device maker, issued a fix for a major software blunder that let hackers easily carry out man-in-the-middle attacks by cryptographically impersonating any website on the internet.

What exactly happened?

HeadSetup establishes an encrypted websocket with the browser to allow Sennheiser headphones and speakerphones to work smoothly with computers. To do so, it installs a self-signed TLS certificate in the central place that the operating system reserves for browser-trusted certificate authority roots: the Trusted Root CA certificate store on Windows and the Trust Store on macOS.

The self-signed root certificate installed by version 7.3 of the HeadSetup Pro application gave rise to the vulnerability because it stored the private cryptographic key in a way that made it easy to extract. Since the key was identical for all installations of the software, hackers could use the root certificate to generate forged TLS certificates impersonating any HTTPS website on the internet. Though these certificates are mere forgeries, they would still be accepted as authentic on computers that store the poorly secured root certificate. Even certificate pinning, a defense against forgery, cannot detect such attacks.

According to security firm Secorvo, “the sensitive key was encrypted with the passphrase SennheiserCC. The key was then encrypted by a separate AES key and then base64 encoded. The passphrase was stored in plaintext in a configuration file. The encryption key was found by reverse-engineering the software binary.” Secorvo researcher André Domnick thus controls a certificate authority trusted by any computer that has installed the vulnerable Sennheiser app. Domnick said he tested his proof of concept only against Windows versions of HeadSetup, but he believes the design flaw is present in the macOS versions as well.

A fix that didn't prove successful

A later version of the Sennheiser app was released to solve this issue. It came with a root certificate installed but did not include the private key. That seemed like a good solution, except the update failed to remove the older root certificate, a major failure that left anyone who had installed the older version susceptible to the TLS forgeries. Uninstalling the app wasn't enough either, as it didn't remove the root certificates that made users vulnerable to the attack. For computers that didn't have the older root certificate installed, the newer version still caused trouble, as it installed a server certificate for the computer's localhost address, 127.0.0.1.

Users reacted negatively to what was a major blunder. One user commented on Ars Technica's post, “This rises to the level of gross negligence and incompetence. There really should be some serious fines for these sorts of transgressions.” The company also ended up violating the CA/Browser Forum Baseline Requirements for issuing certificates, which is itself a serious problem.

This incident raises crucial questions, including whether there is a safe way for an HTTPS website to communicate directly with a local device, and whether companies are taking enough steps to protect users from this kind of fraud. All users who have installed the app are advised to remove or block the installed root certificates. Microsoft has proactively removed trust in the certificates, but users may still have to manually remove the certificates from their Macs and PCs. Read more about this news on Ars Technica.

Packt has put together a new cybersecurity bundle for Humble Bundle
Blackberry is acquiring AI & cybersecurity startup, Cylance, to expand its next-gen endpoint solutions like its autonomous cars’ software
IBM launches Industry’s first ‘Cybersecurity Operations Center on Wheels’ for on-demand cybersecurity support
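The danger of shipping a trusted root together with an extractable private key can be sketched in a few lines with a recent version of the Python cryptography package. This is a generic illustration, not Secorvo's actual proof of concept; the file names and hostname are placeholders.

```python
# Sketch of why a leaked root key is catastrophic: with the root certificate and its
# private key, anyone can mint a leaf certificate for an arbitrary hostname that
# machines trusting that root will accept. Paths and hostname are hypothetical.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Load the (leaked) root certificate and private key.
root_cert = x509.load_pem_x509_certificate(open("root.pem", "rb").read())
root_key = serialization.load_pem_private_key(open("root_key.pem", "rb").read(), password=None)

# Fresh key pair for the forged leaf certificate.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")])
now = datetime.datetime.utcnow()

forged = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(root_cert.subject)          # issued by the trusted root
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("www.example.com")]), critical=False)
    .sign(root_key, hashes.SHA256())          # signature chains to the rogue root
)

print(forged.subject)
```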


TypeScript 3.2 released with configuration inheritance and more

Prasad Ramesh
30 Nov 2018
7 min read
TypeScript 3.2 was released yesterday. TypeScript is a language that brings static type-checking to JavaScript, which enables developers to catch issues even before the code is run. TypeScript 3.2 includes the latest JavaScript features from the ECMAScript standard. In addition to type-checking, it provides tooling in editors to jump to variable definitions, find the users of a function, and automate refactorings. You can install TypeScript 3.2 via NuGet or via npm as follows:

npm install -g typescript

Now let's look at the new features in TypeScript 3.2.

strictBindCallApply

TypeScript 3.2 comes with stricter checking for bind, call, and apply. In JavaScript, bind, call, and apply are methods on functions that allow actions like binding this and partially applying arguments. They also allow you to call functions with a different value for this, and to call functions with an array for their arguments. Earlier, TypeScript didn't have the power to model these functions. Demand to model these patterns in a type-safe way led the TypeScript developers to revisit the problem. Two features opened up the right abstractions to accurately type bind, call, and apply without any hard-coding:

- this parameter types from TypeScript 2.0
- Modeling parameter lists with tuple types from TypeScript 3.0

The combination of these two ensures that uses of bind, call, and apply are more strictly checked when the new strictBindCallApply flag is used. When this flag is enabled, the methods on callable objects are described by a new global type, CallableFunction, which declares stricter versions of the signatures for bind, call, and apply. Similarly, methods on constructable (but not callable) objects are described by a new global type called NewableFunction. A caveat of this new functionality is that bind, call, and apply can't yet fully model generic functions or functions with overloads.

Object spread on generic types

JavaScript has a handy way of copying properties from an existing object into a new object, called “spreads”. To spread an existing object into a new object, you write an element with three consecutive periods (...). TypeScript handles this well when it has enough information about the type, but until now it didn't work with generics at all. A new concept in the type system, an “object spread type”, could have been used: a new type operator that looks like { ...T, ...U } to reflect the syntax of an object spread. If T and U are known, that type would flatten to some new object type. However, this approach was complex and required adding new rules to type relationships and inference. After exploring several different avenues, the team arrived at two conclusions:

- Users were fine modeling the behavior with intersection types for most uses of spreads in JavaScript, for example Foo & Bar.
- Object.assign, a function that exhibits most of the behavior of spreading objects, is already modeled using intersection types, and there has been very little negative feedback around that.

Intersections model the common cases, and they're relatively easy to reason about for both users and the type system. So TypeScript 3.2 now allows object spreads on generics and models them using intersections.

Object rest on generic types

Object rest patterns are the dual of object spreads: instead of creating a new object with some extra or overridden properties, they create a new object that lacks some specified properties.

Configuration inheritance via node_modules packages

TypeScript has long supported extending tsconfig.json files by using the extends field. This feature is useful to avoid duplicating configuration that can easily fall out of sync, and it works best when multiple projects are co-located in the same repository so that each project can reference a common “base” tsconfig.json. But some projects are written and published as fully independent packages; such projects don't have a common file they can reference, so as a workaround users had to create a separate package and reference that. TypeScript 3.2 now resolves tsconfig.json files from node_modules: when a bare path is used for the "extends" field in tsconfig.json, TypeScript will dive into node_modules packages to find it.

Diagnosing tsconfig.json with --showConfig

The TypeScript compiler, tsc, now supports a new flag called --showConfig. On running tsc --showConfig, TypeScript will calculate the effective tsconfig.json and print it out.

BigInt

BigInts are part of an upcoming ECMAScript proposal that allows modeling theoretically arbitrarily large integers. TypeScript 3.2 comes with type-checking for BigInts along with support for emitting BigInt literals when targeting esnext. BigInt support introduces a new primitive type called bigint and is only available for the esnext target.

Object.defineProperty declarations in JavaScript

When writing JavaScript files using allowJs, TypeScript 3.2 recognizes declarations that use Object.defineProperty. This means better completions, and stronger type-checking when enabling type-checking in JavaScript files.

Improvements in error messages

A few things have been added in TypeScript 3.2 that make the language easier to use:

- Better missing-property errors
- Better error spans in arrays and arrow functions
- Error on most-overlapping types in unions, or “pick most overlappy type”
- Related spans when a typed this is shadowed
- A new “Did you forget a semicolon?” warning on parenthesized expressions on the next line
- More specific messages when assigning to const/readonly bindings
- More accurate messages when extending complex types
- Relative module names used in error messages

Improved narrowing for tagged unions

TypeScript now makes narrowing easier by relaxing the rules for a discriminant property. Common properties of unions are now considered discriminants as long as they contain some singleton type (for example, a string literal, null, or undefined) and contain no generics.

Editing improvements

The TypeScript project isn't just a compiler/type-checker. Its core components also provide a cross-platform open-source language service that can power smart editor features, including go-to-definition, find-all-references, and a number of quick fixes and refactorings.

Implicit any suggestions and “infer from usage” fixes: noImplicitAny is a strict checking mode that helps ensure code is as fully typed as possible, which also leads to a better editing experience. TypeScript 3.2 produces suggestions for most variables and parameters that would have been reported as having implicit any types, and provides a quick fix to automatically infer the types when an editor reports these suggestions.

Other fixes: There are two smaller quick fixes: a missing new is added when a constructor is accidentally called without one, and an intermediate assertion to unknown is added when types are sufficiently unrelated.

Improved formatting

TypeScript 3.2 is smarter about formatting several different constructs.

Breaking changes and deprecations

TypeScript has moved further toward generating DOM declarations in lib.d.ts by leveraging IDL files. Certain parameters no longer accept null or now accept more specific types. Certain WebKit-specific properties have been deprecated, and wheelDelta and friends have been removed as they are deprecated properties on WheelEvents.

JSX resolution changes

The logic for resolving JSX invocations has been unified with the logic for resolving function calls. This has simplified the compiler codebase and improved certain use cases. Future TypeScript releases will also require Visual Studio 2017 or higher. For more details, visit the Microsoft Blog.

Introducing ReX.js v1.0.0 a companion library for RegEx written in TypeScript
Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?
Babel 7 released with Typescript and JSX fragment support
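The node_modules-based configuration inheritance is easiest to see with a concrete tsconfig.json. In this sketch, @my-org/tsconfig-base is a hypothetical published package containing a shared base configuration, not a package mentioned in the release notes.

```json
{
  "extends": "@my-org/tsconfig-base/tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist"
  }
}
```

With TypeScript 3.2, the bare "extends" path is resolved from node_modules, so the shared settings can live in an installable package instead of a file path inside a single repository.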

FoundationDB open sources FoundationDB Document Layer with easy scaling, no sharding, consistency and more

Amrata Joshi
30 Nov 2018
6 min read
Yesterday, the team at FoundationDB announced that they are open sourcing the FoundationDB Document Layer, a document-oriented database. It extends the core functionality of the FoundationDB key-value store, which stores all the persistent data. FoundationDB is a distributed database designed to handle large volumes of structured data across clusters of commodity servers, organizing data in an ordered key-value store format.

The FoundationDB Document Layer is a stateless microserver backed by the scalable and transactional features of FoundationDB. It is released under an Apache v2 license. The Document Layer exposes a document database through the MongoDB API, allowing its use via existing MongoDB® client bindings. It implements a subset of the MongoDB API (v 3.0.0), which mainly focuses on CRUD (Create, Read, Update, Delete) operations, indexes, and transactions. The FoundationDB Document Layer works with all official MongoDB drivers.

Key features

No sharding: The Document Layer does not rely on a fixed shard key for distributing data. Data partitioning and rebalancing are managed automatically by the key-value store. This feature is inherited from FoundationDB, which provides robust horizontal scalability and avoids client-level complexity.

Easy scaling: Document Layer instances are stateless and are configured only with the FoundationDB cluster where the data gets stored. This stateless design means that Document Layer instances can be kept behind a load balancer, so queries can easily be handled from any client and for any document.

Safe defaults: Write operations on the Document Layer execute with full isolation and atomicity by default. This consistency makes it easier to correctly implement applications that handle more than one simultaneous request.

Differences and improvements

No locking for conflicting writes: The Document Layer makes use of the concurrency of the key-value store, which doesn't let write operations take locks on the database. If two operations concurrently attempt to modify the same field of the same document, one of them will fail and the client will retry the failed operation. Most operations are retried automatically a configurable number of times.

Irrelevant commands removed: Many database commands, including those related to sharding and replication, have been removed as they are not applicable to the Document Layer.

Multikey compound indexes: The Document Layer's multikey compound indexes allow a document to have array values for more than one of the indexed fields.

MongoDB API compatible: The Document Layer is compatible with the MongoDB protocol, so simple applications built on MongoDB can have a lift-and-shift migration to the Document Layer. To connect an application to the Document Layer, one can use any existing MongoDB client.

Saves time: Instead of logging all operations that take more than a certain amount of time, the Document Layer logs all operations that perform full collection scans on non-system collections.

Custom projections: The Document Layer supports custom projections of query results, but it does not support the projection operators; literal projection documents are used instead. It also does not support the $text or $where query operators.

Non-multikey indexes: If the indexed field on a document contains an array, all indexes allow multiple entries for that document.

Auth: The FoundationDB Document Layer does not support MongoDB authentication, auditing, role-based access control, or transport encryption.

sort parameter: The sort parameter has been disabled in the Document Layer.

$push and $pull operators: The Document Layer doesn't support the $position modifier to the $push operator, and the $sort modifier to the $push operator is only available if the $each modifier and the $slice modifier are both used.

listDatabases command: listDatabases will always return a size of 1000000 bytes for a database that contains any data.

Nested $elemMatch predicates: A query document with two or more nested $elemMatch predicates may behave differently in the Document Layer.

Future scope

Currently, the Document Layer doesn't allow the insertion or update of a document that generates more than 1000 updates to a single index; this limit might become configurable in a future release. The format of the information returned by the $explain operation is not final and may change without prior warning in a future version. The Document Layer doesn't support sessions yet, so that can be expected in a future release. Future releases may emulate the Oplog for migrating applications that directly examine it. The Document Layer does not support the deprecated BSON binary format or the MongoDB internal timestamp type. It does not implement any geospatial query operators yet; they might appear in a future release. Support for tailable cursors and capped collections can also be expected. Currently, there is no support for indexes that contain entries only for documents having the indexed field; this feature might be implemented in the future.

Users are excited about this update, but many are confused about how they would shift from MongoDB, as licensing could be an issue. The Document Layer also still doesn't support features such as the aggregation framework, which could cause trouble for users, and there is currently no data typing and no query engine, which would be expected in future releases. Another concern is its incompatibility with other APIs, for example DynamoDB. Another drawback is that the layered approach consumes more bandwidth unless a transaction is read-only. A few users still harbor bitter feelings about 2015, when FoundationDB was acquired by Apple, and haven't trusted the company since. It will be interesting to see what happens in the next release. To know more about this news, check out the official post by FoundationDB.

Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support
Introducing EuclidesDB, a multi-model machine learning feature database
ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features
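Because the Document Layer speaks the MongoDB wire protocol, an ordinary MongoDB client can be pointed at it. Here is a minimal CRUD sketch using pymongo, assuming a Document Layer instance listening locally; the host, port, database, and collection names are placeholders that depend on your deployment.

```python
# Minimal CRUD sketch against the FoundationDB Document Layer using an ordinary
# MongoDB client. Host/port and names are placeholders; the Document Layer's
# actual listen address depends on how it is deployed.
from pymongo import MongoClient

client = MongoClient("localhost", 27016)   # assumed Document Layer endpoint
db = client["demo"]
articles = db["articles"]

articles.insert_one({"title": "Hello Document Layer", "views": 1})
articles.update_one({"title": "Hello Document Layer"}, {"$inc": {"views": 1}})

for doc in articles.find({"views": {"$gte": 1}}):
    print(doc["title"], doc["views"])
```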


Twitter adopts Apache Kafka as their Pub/Sub System

Bhagyashree R
30 Nov 2018
3 min read
Yesterday, Twitter shared that they are migrating to Apache Kafka from EventBus, an in-house pub/sub system built on top of Apache DistributedLog. The main reasons behind this adoption were Kafka's lower latency, better resource savings, and strong community support.

Apache Kafka is open-source distributed stream-processing software that provides a unified, high-throughput, low-latency platform for handling real-time data feeds. It has seen broad adoption by many big companies such as LinkedIn, Netflix, and Uber, making it the de facto real-time messaging system of choice in the industry.

Why did Twitter decide to move to Apache Kafka?

The Twitter team evaluated Kafka for several months under workloads similar to those run on EventBus, such as durable writes, tailing reads, catch-up reads, and high-fanout reads. They concluded the following:

Lower latency

The evaluation highlighted that Kafka provides significantly lower latency, regardless of the amount of throughput. Latency was measured as the timestamp difference from the time a message was created to when the consumer read it. Kafka's lower latency can be attributed to these factors:

- In EventBus the serving and storage layers are decoupled, which introduces an additional hop; Kafka eliminates this, as a single process handles both storage and request serving.
- Writes in EventBus explicitly block on fsync() calls, while in Kafka the OS is responsible for fsync()ing in the background.
- Kafka supports zero-copy functionality, which greatly improves application performance by reducing the number of context switches between kernel and user mode.

Better resource savings

EventBus separates the serving and storage layers, which calls for additional hardware, while Kafka uses only a single host to provide both. During the evaluation, the team saw 68% resource savings for single-consumer use cases and 75% for multiple-consumer use cases. In addition to this, the latest versions of Kafka support data replication, providing the durability guarantees.

Strong community support

Hundreds of developers contribute to the Kafka project, which ensures regular bug fixes, improvements, and new features, as opposed to only eight or so engineers working on EventBus/DistributedLog. Kafka comes with features that Twitter needed, such as a streaming library, an at-least-once HDFS pipeline, and exactly-once processing, which are not yet implemented in EventBus. Additionally, Twitter will be able to get solutions to any problems they encounter on either the client or server side and get access to better documentation.

Check out the complete announcement on Twitter's website.

Apache Kafka 2.0.0 has just been released
Getting started with the Confluent Platform: Apache Kafka for enterprise
Working with Kafka Streams
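As a small illustration of the publish/subscribe model Twitter is adopting, here is a minimal sketch with the kafka-python client library. The broker address and topic name are placeholders unrelated to Twitter's internal setup.

```python
# Minimal publish/subscribe sketch with the kafka-python client library.
# Broker address and topic are placeholders, not Twitter's internal configuration.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("tweets", b"hello, kafka")
producer.flush()

consumer = KafkaConsumer(
    "tweets",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",   # start from the beginning of the topic
    consumer_timeout_ms=5000,       # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.value.decode())
```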


Cirq 0.4.0 released for writing quantum circuits

Prasad Ramesh
30 Nov 2018
3 min read
Cirq is a Python library created by Google for writing quantum circuits and running them against quantum computers. Cirq 0.4.0 is now released and is available on GitHub.

Themes of the changes in Cirq 0.4.0

The API is now more pythonic and more consistent, at the cost of some breaking changes and refactoring, and simulation is faster.

New functionality in Cirq 0.4.0

The following functions and parameters have been added:

- cirq.Rx, cirq.Ry, and cirq.Rz
- cirq.XX, cirq.YY, cirq.ZZ, and cirq.MS (related to the Mølmer–Sørensen gate)
- cirq.Simulator
- The cirq.SupportsApplyUnitary protocol, for specifying fast simulation methods
- cirq.Circuit.reachable_frontier_from and cirq.Circuit.findall_operations_between
- cirq.decompose
- sorted(qubits) and cirq.QubitOrder.DEFAULT.order_for(qubits) are now equivalent
- cirq.experiments.generate_supremacy_circuit_[...]
- dtype parameters to control the precision versus speed of simulations
- cirq.TrialResult helper methods (dirac_notation / bloch_vector / density_matrix)
- cirq.TOFFOLI and cirq.CCZ can be raised to powers

Breaking changes in Cirq 0.4.0

- Most of the gate classes have been standardized. They can now take an exponent argument and have names of the form NamePowGate. For example, RotXGate is now XPowGate and no longer takes rads, degs, or half_turns.
- The xmon gate set has been merged into the common gate set.
- The capability marker classes have been replaced by magic method protocols. For example, gates now just implement a _unitary_ method instead of inheriting from KnownMatrix. cirq.Extensions and cirq.PotentialImplementation are removed.
- Many decomposition classes and methods have moved from cirq.google.* to cirq.*. For example, cirq.google.EjectFullW is now cirq.EjectPhasedPaulis.
- The classes and methods related to line placement have moved into cirq.google.

Notable bug fixes

- A two-qubit gate decomposition will no longer produce a glut of single-qubit gates.
- Circuit diagrams stay aligned when multi-line entries are given, and they now include "same moment" indicators.
- False positives and false negatives in cirq.testing.assert_circuits_with_terminal_measurements_are_equivalent have been fixed.
- Many repr methods that returned code assuming from cirq import * instead of import cirq have been fixed.
- Example code now runs in both Python 2 and Python 3 without the need for transpilation.

Notable dev changes

- Test files now import cirq instead of just specific modules.
- There is better testing and packaging of scripts.
- The package versions for Python 2 and Python 3 are no longer different.
- A cirq.value_equality decorator has been added.
- New cirq.testing methods and classes have been added.

Additions to contrib

- cirq.contrib.acquaintance: new utilities for defining permutation gates
- cirq.contrib.paulistring: utilities for optimizing non-Clifford operations which are separated by Clifford operations
- cirq.contrib.tpu: utilities for converting circuits into an executable form to be used on cloud TPUs (requires TensorFlow)

Google AdaNet, a TensorFlow-based AutoML framework
Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet
A new Model optimization Toolkit for TensorFlow can make models 3x faster
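To give a feel for the library, here is a minimal sketch that builds and simulates a Bell-pair circuit with the cirq.Simulator added in this release. It is written against the 0.4-era API (for example, Circuit.from_ops), which later Cirq versions have since changed.

```python
# Minimal Cirq sketch: a Bell-pair circuit run on the cirq.Simulator added in 0.4.0.
# Written against the 0.4-era API (Circuit.from_ops); newer Cirq versions differ.
import cirq

q0, q1 = cirq.LineQubit(0), cirq.LineQubit(1)

circuit = cirq.Circuit.from_ops(
    cirq.H(q0),                      # put the first qubit in superposition
    cirq.CNOT(q0, q1),               # entangle the two qubits
    cirq.measure(q0, q1, key="m"),   # measure both qubits
)

result = cirq.Simulator().run(circuit, repetitions=20)
print(circuit)
print(result.measurements["m"])
```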

The Golang team has started working on Go 2 proposals

Prasad Ramesh
30 Nov 2018
4 min read
Yesterday, Google engineer Robert Griesemer published a blog post outlining the next steps for Golang towards the Go 2 release. Google developer Russ Cox started the thought process behind Go 2 in his talk at GopherCon 2017. The talk was about the future of Go, and the changes discussed there came to be informally called Go 2. A major change between the two versions is in the way design and changes are influenced: the first version involved only a small team, but the second version will have much more participation from the community. The proposal process started in 2015, and the Go core team will now work on the proposals for the second version of the programming language.

The current status of Go 2 proposals

As of November 2018, there are about 120 open issues on GitHub labeled "Go 2 proposal". Most of them revolve around significant language or library changes that are often not compatible with Go 1. The ideas from these proposals will probably influence the language and libraries of the second version. There are now millions of Go programmers and a large body of Go code that needs to be brought along without an ecosystem split, so the changes need to be few and carefully selected. To do this, the Go core team is implementing a proposal evaluation process for significant potential changes.

The proposal evaluation process

The purpose of the evaluation process is to collect feedback on a small number of select proposals in order to make a final decision. This process runs in parallel with a release cycle and has five steps:

- Proposal selection: The Go core team selects a few Go 2 proposals that seem to be good candidates for acceptance.
- Proposal feedback: The Go team announces the selected proposals and collects feedback from the community. This gives the larger community an opportunity to make suggestions or express concerns.
- Implementation: The proposals are implemented based on the feedback received. The goal is to have significant changes ready to submit on the first day of an upcoming release cycle.
- Implementation feedback: The Go team and community have a chance to experiment with the new features during the development cycle, which provides further feedback.
- Final launch decision: The Go team makes the final decision on shipping each change at the end of the three-month development cycle. At this point there is an opportunity to consider whether the change delivers the expected benefits or has created any unexpected costs. Once shipped, the changes become part of the Go language.

Proposal selection process and the selected proposals

For a proposal to be selected, the minimum criteria are that it should:

- address an important issue for a large number of users
- have minimal impact on other users
- come with a clear and well-understood solution

For this trial, a select few proposals will be implemented that are backward compatible and hence less likely to break existing functionality. The proposals are:

- General Unicode identifiers based on Unicode TR31, which will allow identifiers in non-Western alphabets.
- Adding binary integer literals and support for underscores (_) in number literals. Not a change that solves a major problem, but it brings Go up to par with other languages in this respect.
- Permitting signed integers as shift counts. This will clean up code and bring shift expressions better in sync with index expressions and built-in functions like cap and len.

The Go team has now started the proposal evaluation process, and the community can provide feedback. Proposals with clear, positive feedback will be taken forward, with the aim of having implementations ready by February 1, 2019. The development cycle runs February to May 2019, and the chosen features will be implemented as per the outlined process. For more details, you can visit the Go Blog.

Golang just celebrated its ninth anniversary
GoCity: Turn your Golang program into a 3D city
Golang plans to add a core implementation of an internal language server protocol


Meet JFrog Xray, a binary analysis tool for performing security scans and dependency analyses

Sugandha Lahoti
29 Nov 2018
2 min read
Last month, JFrog, a DevOps-focused artifact management platform, bagged $165 million in Series D funding. Now the company is announcing JFrog Xray, a binary analysis tool for performing recursive security scans and dependency analyses on all standard software package and container types. It performs a multilayer analysis of containers and software artifacts for vulnerabilities, license compliance, and quality assurance.

JFrog Xray is available as a pure cloud subscription, making Xray the only cloud utility integrated with a universal artifact binary repository. Xray Cloud is available for customers on Amazon Web Services and Google Cloud Platform, and soon on Azure. Xray's database can also plug into other data sources, giving customers maximum flexibility and coverage.

It is available in two versions: an on-prem version, which users install, manage, and maintain on their own hardware or host in the cloud themselves, and a cloud version, where JFrog manages, maintains, and scales the infrastructure and provides automated server backups with free updates and guaranteed uptime.

Features of JFrog Xray:

- Artifact analysis for all major package formats across the CI/CD pipeline
- Deep recursive scanning to provide insight into the component graph and show the impact that an issue has on software artifacts
- Native Artifactory integration, enriching artifacts with metadata to protect software from potential threats
- Fully automated protection for development, build, and production phases through IDE and CI/CD integration and a REST API
- 24/7 R&D-level support

Currently, JFrog Xray is being used by companies such as Slack, Workday, and AT&T, and has helped its customers avoid nearly 57,000 unique software package vulnerabilities. “The ability to provide scalable security solutions in a hybrid cloud model has definitely become a requirement in the enterprise,” said Dror Bereznitsky, VP of Product Management for JFrog. “We're proud that Xray is uniquely providing not only reliable scanning and compliance management, but also delivering these solutions at a massive scale across leading cloud providers to give customers maximum flexibility.” More information on Xray Cloud is available on JFrog's official website.

JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding.
Packt has put together a new cybersecurity bundle for Humble Bundle.
Data Theorem launches two automated API security analysis solutions – API Discover and API Inspect