
Tech News

3711 Articles

Amazon announces improved VPC networking for AWS Lambda functions

Amrata Joshi
04 Sep 2019
3 min read
Yesterday, the team at Amazon announced improved VPC (Virtual Private Cloud) networking for AWS Lambda functions, a major improvement in how AWS Lambda functions work with Amazon VPC networks.

If a Lambda function is not configured to connect to your VPCs, the function can access anything available on the public internet, including other AWS services, HTTPS endpoints for APIs, or endpoints and services outside AWS. The function then has no way to connect to the private resources inside your VPC. When a Lambda function is configured to connect to your own VPC, it creates an elastic network interface within the VPC and performs a cross-account attachment.

Image Source: Amazon

These Lambda functions run inside the Lambda service's VPC but can only access resources over the network through your VPC. Even so, you still don't have direct network access to the execution environment where the functions run.

What has changed in the new model?

AWS Hyperplane for providing NAT (Network Address Translation) capabilities

The team is using AWS Hyperplane, the Network Function Virtualization platform that is used for Network Load Balancer and NAT Gateway. It has also supported inter-VPC connectivity for AWS PrivateLink. With the help of Hyperplane, the team will provide NAT capabilities from the Lambda VPC to customer VPCs.

Network interfaces within your VPC are mapped to the Hyperplane ENI

The Hyperplane ENI (Elastic Network Interface), a network resource controlled by the Lambda service, allows multiple execution environments to securely access resources within the VPCs in your account. In the previous model, the network interfaces in your VPC were directly mapped to Lambda execution environments; in the new model, the network interfaces within your VPC are mapped to the Hyperplane ENI.

Image Source: Amazon

How is Hyperplane useful?

To reduce latency

When a function is invoked, the execution environment now uses the pre-created network interface and establishes a network tunnel to it, which reduces latency.

To reuse network interfaces across functions

Each unique security group and subnet combination across functions in your account needs a distinct network interface. If such a combination is shared across multiple functions in your account, it is now possible to reuse the same network interface across those functions.

What remains unchanged?

AWS Lambda functions still need IAM permissions for creating and deleting network interfaces in your VPC. Users can still control the subnet and security group configurations of the network interfaces. Users still need a NAT device (for example, VPC NAT Gateway) to give a function internet access, or VPC endpoints to connect to services outside of their VPC. The types of resources that your functions can access within the VPCs also remain the same. (For a concrete sense of what attaching a function to a VPC looks like, see the sketch after the related coverage below.)

The official post reads, “These changes in how we connect with your VPCs improve the performance and scale for your Lambda functions. They enable you to harness the full power of serverless architectures.” To know more about this news, check out the official post.

What’s new in cloud & networking this week?

Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
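As referenced above, here is a minimal sketch of attaching an existing function to a VPC with the AWS SDK for JavaScript. This is an illustration rather than the announcement's own example; the function name, subnet ID, and security group ID are hypothetical placeholders.

```js
// Minimal sketch: attach an existing Lambda function to a VPC.
// All identifiers below are hypothetical placeholders.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.updateFunctionConfiguration({
  FunctionName: 'my-function',                  // hypothetical function name
  VpcConfig: {
    SubnetIds: ['subnet-0abc1234def567890'],    // hypothetical subnet
    SecurityGroupIds: ['sg-0abc1234def567890'], // hypothetical security group
  },
})
  .promise()
  .then((cfg) => console.log('VPC config applied:', cfg.VpcConfig))
  .catch(console.error);
```

Under the new model described above, the Hyperplane ENI for this security group and subnet combination is created once and shared across execution environments, rather than one interface per environment.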


Silicon-Interconnect Fabric is soon on its way to replace Printed Circuit Boards, new UCLA research claims

Sugandha Lahoti
26 Sep 2019
4 min read
Researchers from UCLA claim in a new study that printed circuit boards could be replaced with what they call silicon-interconnect fabric, or Si-IF. This fabric allows bare chips to be connected directly to wiring on a separate piece of silicon. The researchers are Puneet Gupta and Subramanian Iyer, members of the electrical engineering department at the University of California at Los Angeles.

How can Silicon-Interconnect Fabric be useful?

In a report published on IEEE Spectrum on Tuesday, the researchers suggest that replacing printed circuit boards with silicon will especially help in building smaller, lighter-weight systems for wearables and other size-constrained gadgets. They write, “Unlike connections on a printed circuit board, the wiring between chips on our fabric is just as small as wiring within a chip. Many more chip-to-chip connections are thus possible, and those connections are able to transmit data faster while using less energy.”

Si-IF can also be useful for building “powerful high-performance computers that would pack dozens of servers’ worth of computing capability onto a dinner-plate-size wafer of silicon.”

The silicon-interconnect fabric could potentially dissolve the system-on-chip (SoC) into integrated collections of dielets, or chiplets. The researchers say, “It’s an excellent path toward the dissolution of the (relatively) big, complicated, and difficult-to-manufacture systems-on-chips that currently run everything from smartphones to supercomputers. In place of SoCs, system designers could use a conglomeration of smaller, simpler-to-design, and easier-to-manufacture chiplets tightly interconnected on an Si-IF.”

The researchers linked up chiplets on a silicon-interconnect fabric built on a 100-millimeter-wide wafer. Unlike chips on a printed circuit board, the chiplets can be placed a mere 100 micrometers apart, speeding signals and reducing energy consumption. To evaluate the size savings, the researchers compared an Internet of Things system based on an Arm microcontroller: using Si-IF not only shrinks the board by 70 percent but also reduces its weight from 20 grams to 8 grams.

Challenges associated with Silicon-Interconnect Fabric

Even though significant progress has been made on Si-IF integration over the past few years, the researchers point out that much remains to be done. For instance, there is a need for a commercially viable, high-yield Si-IF manufacturing process. Mechanisms are also needed to test bare chiplets as well as unpopulated Si-IFs. New heat sinks or other thermal-dissipation strategies will be required to take advantage of silicon’s good thermal conductivity. In addition, the chassis, mounts, connectors, and cabling for silicon wafers need to be engineered to enable complete systems. Several changes to design methodology also need to be made, and system reliability must be considered.

People agreed that the research looked promising. However, some felt that replacing PCBs with Si-IF sounded overreaching, to begin with. A comment on Hacker News reads, “I agree this looks promising, though I'm not an expert in this field. But the title is a bit, well, overpromising or broad. I don't think we'll replace traditional motherboards anytime soon (except maybe in smartphones?). Rather, it will be an incremental progress.”

Others were also not convinced. A Hacker News user pointed out several benefits of PCBs:

“PCBs are cheaper to manufacture than silicon wafers.
PCBs can be arbitrarily created and adjusted with little overhead cost (time and money).
PCBs can be re-worked if a small hardware fault(s) is found.
PCBs can carry large amount of power.
PCBs can help absorb heat away from some components.
PCBs have a small amount of flexibility, allowing them to absorb shock much easier.
PCBs can be cut in such a way as to allow for mounting holes or be in relatively arbitrary shapes.
PCBs can be designed to protect some components from static damage.”

You can read the full research on IEEE.

Hot Chips 31: IBM Power10, AMD’s AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Samsung develops Key-value SSD prototype, paving the way for optimizing network storage efficiency and extending server CPU processing power
MIT researchers built a 16-bit RISC-V compliant microprocessor from carbon nanotubes


Brave Privacy Browser has a ‘backdoor’ to remotely inject headers in HTTP requests: HackerNews

Melisha Dsouza
11 Feb 2019
3 min read
Brave, the open source privacy-focused browser, has allegedly introduced a ‘backdoor’ to remotely inject headers in HTTP requests that may track users, say users on Hacker News. Users on Twitter and Hacker News have expressed their concerns over the new update on custom HTTP headers added by the Brave team:

https://twitter.com/WithinRafael/status/1094712882867011585

Source: HackerNews

A user on Reddit has explained that this move is “not tracking anything, they just send the word "Brave" to the website whenever you visit certain partners of theirs. So for instance visiting coinbase.com sends an "X-Brave-Partner" custom header to coinbase.com.”

Brendan Eich, from the Brave team, has replied to this allegation, saying that the update ‘is not a "backdoor" in any event and is a custom header instead.’ He says the update is about custom HTTP headers that Brave sends to its partners, with fixed header values, and that there is no tracking hazard in the new update. He further stresses that Brave blocks third-party cookies and storage and third-party fingerprinting, along with HSTS supercookies, thus assuring users that their privacy is preserved. “I find it silly to assume we will "heel turn" so obviously and track our users. C'mon! We defined our model so we can't cheat without losing lead users who would see through it. That requires seeing clearly things like the difference between tracking and script blocking or custom header sending, though.”

Users have also posted on Hacker News that the Brave browser’s Tracking Protection feature does not block tracking scripts from hostnames associated with Facebook and Twitter. The tracking_protection_service.h file contains a comment informing that a tracking protection white_list variable was created as a "Temporary hack which matches both browser-laptop and Android code". Bleeping Computer also reports that this whitelist variable is associated with code in the tracking_protection_service.cc file that adds various Facebook and Twitter hostnames to the whitelist variable so that they are not blocked by Brave's Tracking Protection feature. In response to this comment, Brave says that the issue was opened on September 8th, 2018, and developers decided to whitelist tracking scripts from Facebook and Twitter because blocking them would “affect the functionality of many sites”, including Facebook logins.

You can head over to Brendan’s Reddit thread for more insights on this update.

Brave introduces Brave Ads that share 70% revenue with users for viewing ads
Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart
Otter Browser’s first stable release, v1.0.01 is out


Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4

Bhagyashree R
29 Jul 2019
4 min read
In April, the Mozilla IoT team relaunched Project Things as “WebThings” with its two components: WebThings Gateway and WebThings Framework. WebThings is an open-source implementation of W3C’s Web of Things standard for monitoring and controlling connected devices on the web. On Friday, the team announced the release of WebThings Gateway 0.9 and the availability of its first experimental builds for Turris Omnia. This release is also the first version of WebThings Gateway to support the new Raspberry Pi 4. Along with that, they have also released WebThings Framework 0.12.

W3C’s Web of Things standard

The Internet of Things (IoT) has a lot of potential, but it suffers from a lack of interoperability across platforms. The Web of Things aims to solve this by building a decentralized IoT that uses the web as its application layer. It provides mechanisms to formally describe IoT interfaces so that IoT devices and services can interact with each other, independent of their underlying implementation. To connect real-world things to the web, each thing is assigned a URI to make it linkable and discoverable. The standard is currently going through the W3C standardization process.

Updates in WebThings Gateway 0.9 and WebThings Framework 0.12

WebThings Gateway is a software distribution for smart home gateways that allows users to monitor and control their smart home devices over the web, without a middleman. Among the protocols it supports are HomeKit, ZigBee, Thread, MQTT, Weave, and AMQP. Among the languages it supports are JS (Node.js), Python, Rust, Java, and C++.

The experimental builds of WebThings Gateway 0.9 are based on OpenWrt, a Linux operating system for embedded devices. They come with a new first-time setup for configuring the gateway as a router and Wi-Fi access point itself, instead of connecting to an existing Wi-Fi network.

Source: Mozilla

However, Mozilla noted that the router configurations are still pretty basic and not yet ready to replace your existing wireless router. “This is just our first step along the path to creating a full software distribution for wireless routers,” reads the announcement. We can expect support for other wireless routers and router developer boards in the near future.

This version ships with a new type of add-on called notifier add-ons. In previous gateway versions, push notifications were the only way of notifying users of events. But this mechanism is not supported by all browsers, and it is also not considered the most convenient way of notifying users. As a solution, Mozilla came up with notifier add-ons, which let you create a set of outlets. These outlets act as outputs for a defined rule. For instance, you can set up a rule to get an SMS or an email whenever any motion is detected in your home. You can also configure a notifier with a title, a message, and a priority level.

Source: Mozilla

WebThings Gateway 0.9 and WebThings Framework 0.12 also bring a few changes to Thing Descriptions to make them more aligned with the latest W3C drafts. A Thing Description provides a vocabulary for describing physical devices connected to the web in a machine-readable format with a default JSON encoding. The “name” property is now changed to “title”, and there are experimental new properties of the Thing Descriptions exposed by the gateway.

To know more, check out Mozilla’s official announcement. To get started, head over to its GitHub repository.
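To make the renamed field concrete, here is a minimal, hypothetical Thing Description written as a JavaScript object mirroring the default JSON encoding. The device, its property, and the @context URL follow the general shape of Mozilla's schemas but are assumptions for illustration, not an excerpt from the release.

```js
// A minimal, hypothetical Thing Description as a gateway might expose it.
// Note "title" (new in these releases) where earlier drafts used "name".
const lampDescription = {
  title: 'Living Room Lamp',                     // was "name" in older drafts
  '@context': 'https://iot.mozilla.org/schemas', // assumed schema context
  '@type': ['Light', 'OnOffSwitch'],
  properties: {
    on: {
      title: 'On/Off',
      type: 'boolean',
      links: [{ href: '/things/lamp/properties/on' }], // hypothetical URI
    },
  },
};

console.log(JSON.stringify(lampDescription, null, 2));
```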
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices


Ionic + Angular: Powering the App store and the web (from Angular Blog - Medium)

Matthew Emerick
31 Aug 2020
5 min read
Did you know Ionic and Angular power roughly 10% of the apps on iOS and almost 20% of apps on Android? Let’s repeat that: Angular powers a significant chunk of apps in the app stores. Why is it helpful to know this? Well, if you were on the fence about what technology choice you should make for your next app, it should be reassuring to know that apps powered by web technology are thriving in the app store. Let’s explore how we came to that conclusion and why it matters.

First, for a number of reasons, users visit these stores and download apps that help them in their day-to-day lives. Users are searching for ToDo apps (who doesn’t love a good ToDo app), banking apps, work-specific apps, and so much more. A good portion of these apps are built using web technologies such as Ionic and Angular. But enough talk, let’s look at some numbers to back this up.

The Data

If you’ve never heard of Appfigures, it’s an analytics tool that monitors and offers insights on more than 150,000 apps. Appfigures provides some great insight into what kind of tools developers are using to build their apps, like what’s the latest messaging, mapping, or development SDK. That last one is the most important metric we want to explore. Let’s look at what the top development SDKs are for app development:

Data from https://appfigures.com/top-sdks/development/apps

Woah, roughly 10% of the apps on iOS and almost 20% of apps on Android use Ionic and Angular. This is huge. The data here is gathered by analyzing the various SDKs used in apps on the app stores. In these charts we see some options that are to be expected, like Swift and Kotlin. But Ionic and Angular are still highly present. We could even include Cordova, since many Ionic apps are Cordova-based, and these stats would increase even more. But we’ll keep to the data that we know for sure. Given the number of apps out there, even 10% and 20% are a significant share.

If you ignore Appfigures, you can get a sense of how many Ionic/Angular apps are out there by just searching for “com.ionicframework”, which is our starting package ID (also, people should really change this). Here’s a link if you’re interested.

Why Angular for mobile?

Developers are using Ionic and Angular to power a good chunk of the app stores. With everything Angular has to offer in terms of developer experience, tooling for fast apps, and its ecosystem of third-party libraries (like Ionic), it’s no wonder developers choose it as their framework of choice. From solo devs, to small shops, to large organizations like the UK’s National Health Service, Boehringer Ingelheim and BlueCross BlueShield, these organizations have selected Angular and Ionic for their tech stack, and you should feel confident to do so as well.

Web vs. App Stores

If Ionic and Angular are based on web technologies, why even target the app stores at all? With Progressive Web Apps gaining traction and the web becoming a more capable platform, why not just deploy to the web and skip the app stores? Well, it turns out that the app stores provide a lot of value that products need. Features like push notifications and the File System API are starting to come to the web, but they are still not fully available in every browser. Building a hybrid app with Ionic and Angular allows developers to use these features and gracefully fall back when these APIs are not available. Discoverability is also another big factor here. While we can search for anything on the web, having users discover your app can be challenging. The app stores regularly promote new apps and highly rated apps as well. This can make the difference between a successful product and one that fails.

The Best of Both Worlds

The web is still an important step to shipping a great app. But when developers want to target different platforms, having the right libraries in place can make all the difference. For instance, if you want to build a fast and performant web app, Angular is an excellent choice and is statistically the choice many developers make. On the other hand, if you want to bring that app experience to a mobile device, with full access to every native feature and offline capabilities, then a hybrid mobile app using Angular plus a mobile SDK like Ionic is the way to go. Either way, your investment in Angular will serve you well. And you’ll be in good company, with millions of devs and nearly a million apps right alongside you.

Ionic + Angular: Powering the App store and the web was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Oracle announces a new pricing structure for Java

Pavan Ramchandani
25 Jun 2018
2 min read
Oracle has announced a major shift in the pricing structure for its various Java offerings. Currently, there are many offerings for the core Java language: Java binaries, Java for desktops, commercial offerings, and others. Java binaries are offered free to developers under the General Public License 2 (GPL 2). Java SE is offered with entry-level support for $2.50 per desktop per month, or $25 per CPU per month.

Under the free offering for developers, Oracle will provide OpenJDK builds (the backend that keeps Java running on any system) under the GPL + CPE license. To make the offering more flexible, Oracle is working on Oracle JDK, which will support Java SE 11 (the LTS release) set to launch in September 2018. With Oracle JDK, Oracle is trying to make the offering of Java binaries simpler for developers, as it will be royalty-free for open-source development, testing, and similar uses.

For the commercial license, Oracle will be offering Java SE Subscriptions combined with technical support and access to all the updates that follow the Java SE release cycle. Apart from the commercial offering, Oracle also has varied pricing for offerings through Oracle Academy.

The new Java SE Subscription comes with a feature called the Java Advanced Management Console. This feature enables license holders to identify, manage, and tune Java SE use in systems across the enterprise. It also includes Oracle Premier Support, enabling support for Java across current and previous versions.

Oracle, in its press release, mentioned that the update in the subscription model is inspired by how Linux provides support for updates in the platform: "the subscription model for updates and support has been long established in the Linux ecosystem". With this new subscription model, Oracle ensures that anyone requiring an additional level of support for Oracle products can receive it with flexible pricing, while still keeping a balance between its open source and commercial offerings.

For all the details on these subscriptions, you can visit the Java SE subscription FAQs.

Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
5 Things you need to know about Java 10
Oracle Apex 18.1 is here!

Patreon speaks out against the protests over its banning Sargon of Akkad for violating its rules on hate speech

Natasha Mathur
19 Dec 2018
3 min read
Patreon, a popular crowdfunding platform, published a post yesterday in defense of its removal last week of Sargon of Akkad (Carl Benjamin), an English YouTuber known for his anti-feminist content, over concerns that he violated its policies on hate speech. Patreon has been receiving backlash ever since from users and patrons of the website, who are calling for a boycott.

“Patreon does not and will not condone hate speech in any of its forms. We stand by our policies against hate speech. We believe it’s essential for Patreon to have strong policies against hate speech to build a safe community for our creators and their patrons,” says the Patreon team.

Patreon mentioned that it reviews the creations posted by content creators on other platforms that are funded via Patreon. Since Benjamin is quite popular for his collaborations with other creators, Patreon’s community guidelines, which strictly prohibit hate speech, also apply to those collaborations. According to Patreon’s community guidelines, “Hate speech includes serious attacks, or even negative generalizations, of people based on their race [and] sexual orientation.” In an interview on another YouTuber’s channel, Benjamin used racial slurs linked with “negative generalizations of behavior”, quite contrary to how people of those races actually act, to insult others. Apart from racial slurs, he also used slurs related to sexual orientation, which violates Patreon’s community guidelines.

However, a lot of people are not happy with Patreon’s decision. For instance, Sam Harris, a popular American author, podcast host, and neuroscientist, who had one of the top-grossing accounts on Patreon (with nearly 9,000 paying patrons at the end of November), deleted his account earlier this week, accusing the platform of “political bias”. He wrote, “the crowdfunding site Patreon has banned several prominent content creators from its platform. While the company insists that each was in violation of its terms of service, these recent expulsions seem more readily explained by political bias. I consider it no longer tenable to expose any part of my podcast funding to the whims of Patreon’s ‘Trust and Safety’ committee”.

https://twitter.com/SamHarrisOrg/status/1074504882210562048

Apart from banning Carl Benjamin, Patreon also banned Milo Yiannopoulos, a British public speaker and YouTuber with over 839,286 subscribers, earlier this month over his association with the Proud Boys, which Patreon has classified as a hate group.

https://twitter.com/Patreon/status/1070446085787668480

James Allsup, an alt-right political commentator and associate of Yiannopoulos’, was also banned from Patreon last month for his association with hate groups.

Amidst this controversy, some top Patreon creators, such as Jordan Peterson, a popular Canadian clinical psychologist whose YouTube channel has over 1.6M subscribers, and Dave Rubin, an American libertarian political commentator, announced plans earlier this week to start an alternative to Patreon. Peterson said that the new platform will work on a subscriber model similar to Patreon’s, only with a few additional features.

https://www.youtube.com/watch?v=GWz1RDVoqw4

“We understand some people don’t believe in the concept of hate speech and don’t agree with Patreon removing creators on the grounds of violating our Community Guidelines for using hate speech. We have a different view,” says the Patreon team.
Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media
Twitter takes action towards dehumanizing speech with its new policy
How IRA hacked American democracy using social media and meme warfare to promote disinformation and polarization: A new report to Senate Intelligence Committee


MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code

Natasha Mathur
17 Oct 2018
3 min read
MongoDB, a leading free and open source general-purpose database platform, announced yesterday that it has issued a new software license, the Server Side Public License (SSPL), for the MongoDB community server. The new license will apply to all new releases and versions of the MongoDB community server, including patch fixes for prior versions.

“The market is increasingly consuming software as a service, creating an incredible opportunity to foster a new wave of great open source server-side software. Unfortunately, once an open source project becomes interesting, it is too easy for cloud vendors who have not developed the software to capture all of the value while contributing little back to the community,” mentioned Eliot Horowitz, CTO and co-founder, MongoDB.

Earlier, MongoDB was licensed under the GNU AGPLv3 (AGPL). This license allowed companies to modify and run MongoDB as a publicly available service, but only if they open sourced their software or acquired a commercial license from MongoDB. However, as the popularity of MongoDB grew, some cloud providers started taking MongoDB’s open-source code to offer a hosted commercial version of its database to their users without abiding by the open-source rules. This is why MongoDB decided to switch to the SSPL.

“We have greatly contributed to, and benefited from, open source, and are in a unique position to lead on an issue impacting many organizations. We hope this new license will help inspire more projects and protect open source innovation,” said Horowitz.

The SSPL is not very different from the AGPL license; it just clearly specifies the conditions for providing open source software as a service. In fact, the new license offers the same level of freedom to the open source community as the AGPL. Companies still have the freedom to use, review, modify, and redistribute the software, but to offer MongoDB as a service, they need to open source the software that they’re using. This is not applicable to customers who have purchased a commercial license from MongoDB.

“We are big believers in open source. It leads to more valuable, robust and secure software. However, it is important that open source licenses evolve to keep pace with the changes in our industry. With the added protection of the SSPL, we can continue to invest in R&D and further drive innovation and value for the community,” mentioned Dev Ittycheria, President & CEO, MongoDB.

For more information, check out the official MongoDB announcement.

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more


Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable

Bhagyashree R
11 Sep 2018
2 min read
Now you can store your data in Watermelon! Yesterday, Nozbe released Watermelon DB v0.6.1-1, a new addition to the database world. It aims to help you build powerful React and React Native apps that scale to a large number of records and remain fast.

Watermelon’s architecture is database-agnostic, making it cross-platform. It is a high-level layer for dealing with data that can be plugged into any underlying database, depending on platform needs.

Why choose Watermelon DB?

Watermelon DB is optimized for building complex React and React Native applications. The following factors help ensure high application speed:

It makes your application highly scalable by using lazy loading, which means Watermelon DB loads data only when it is requested.
Most queries resolve in less than 1 ms, even with 10,000 records, as all querying is done on an SQLite database on a separate thread.
You can launch your app instantly irrespective of how much data you have.
It is supported on iOS, Android, and the web.
It is statically typed, keeping Flow, a static type checker for JavaScript, in mind.
It is fast, asynchronous, multi-threaded, and highly cached.
It is designed to be used with a synchronization engine to keep the local database up to date with a remote database.

(For a feel of the API, see the sketch after the related stories below.)

Currently, Watermelon DB is in active development and cannot be used in production. Their roadmap states that migrations will soon be added to allow production use of Watermelon DB. Schema migrations are the mechanism by which you can add new tables and columns to the database in a backward-compatible way.

To know how you can install it and to try a few examples, check out Watermelon DB on GitHub.

React Native 0.57 coming soon with new iOS WebViews
What’s in the upcoming SQLite 3.25.0 release: window functions, better query optimizer and more
React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!
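As referenced above, here is a minimal sketch of defining a schema and running a lazy query, loosely based on the project's public API; the table and column names are invented, and details may differ across versions.

```js
import { appSchema, tableSchema } from '@nozbe/watermelondb'

// Hypothetical schema: a single "posts" table with two string columns.
const schema = appSchema({
  version: 1,
  tables: [
    tableSchema({
      name: 'posts',
      columns: [
        { name: 'title', type: 'string' },
        { name: 'body', type: 'string' },
      ],
    }),
  ],
})

// Queries are lazy and run on a separate thread; records load only
// when requested, which keeps launch time independent of data size.
async function loadPosts(database) {
  return database.collections.get('posts').query().fetch()
}
```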


You can now use WebAssembly from .NET with Wasmtime!

Vincy Davis
05 Dec 2019
3 min read
Two months ago, ASP.NET Core 3.0 was released with an updated version of the Blazor framework, which allows building interactive client-side web UIs with .NET. Yesterday, Peter Huene, a staff research engineer at Mozilla, shared his experience of using Wasmtime with .NET. He affirms that this enables developers to programmatically load and execute WebAssembly code directly from their .NET programs.

Key benefits of using WebAssembly from .NET with Wasmtime

Share more code across platforms

Although .NET Core enables cross-platform use, developers find it difficult to use a native library, as .NET Core requires native interop and a platform-specific build for each supported platform. However, if the native library is compiled to WebAssembly, the same WebAssembly module can be used across many different platforms and programming environments, including .NET. This simplifies distribution of the library and its applications, allowing developers to share more code across platforms.

Securely isolate untrusted code

According to Huene, “The .NET Framework attempted to sandbox untrusted code with technologies such as Code Access Security and Application Domains, but ultimately these failed to properly isolate untrusted code.” As a result, Microsoft deprecated their use for sandboxing and removed them from .NET Core. Huene asserts that since WebAssembly is designed for the web, a module can only call external functions explicitly imported from the host environment and can only access a region of memory given to it by the host. Users can leverage this design to sandbox code in a .NET program as well.

Improved interoperability with interface types

In August this year, WebAssembly’s interface types enabled WebAssembly to be used with many programming languages, like Python, Ruby, and Rust. This interoperability reduced the amount of glue code necessary for passing complex types between the hosting application and a WebAssembly module. According to Huene, if Wasmtime implements official support for interface types in its .NET API in the future, it will enable a seamless exchange of complex types between WebAssembly and .NET.

Users have liked the approach of using WebAssembly from .NET with Wasmtime.

https://twitter.com/mattferderer/status/1202276545840197633
https://twitter.com/seangwright/status/1202488332011347968

To know how Peter Huene used WebAssembly from .NET, check out his demonstrations on the Mozilla Hacks blog.

Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist
.NET Framework API Porting Project concludes with .NET Core 3.0
Wasmer’s first Postgres extension to run WebAssembly is here!
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module
Introducing SwiftWasm, a tool for compiling Swift to WebAssembly

NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs

Prasad Ramesh
13 Dec 2018
2 min read
The NYU and AWS teams in Shanghai have introduced a new library called Deep Graph Library (DGL). DGL is a package built on Python to simplify deep learning on graphs, on top of existing deep learning frameworks.

DGL is essentially a Python package that serves as an interface between any existing tensor library and data expressed as graphs. It makes it easy to implement graph neural networks such as Graph Convolutional Networks, TreeLSTM, and others, while maintaining high computation efficiency. This new Python library is an effort to make graph implementations in deep learning simpler. According to the results they report, the improvement on some models is as high as 10x, with better accuracy in some cases. Check out the results on GitHub.

Their website states: “We are keen to bring graphs closer to deep learning researchers. We want to make it easy to implement graph neural networks model family. We also want to make the combination of graph based modules and tensor based modules (PyTorch or MXNet) as smooth as possible.”

As of now, DGL supports PyTorch v1.0, and its autobatching is up to 4x faster than DyNet. DGL is tested on Ubuntu 16.04, macOS X, and Windows 10 and will work on any newer versions of these OSes. Python 3.5 or later is required, while Python 3.4 or older is not tested. Support for Python 2 is in the works.

Installing it is the same as any other Python package. With pip:

```
pip install dgl
```

And with conda:

```
conda install -c dglteam dgl
```

https://twitter.com/aussetg/status/1072897828677144582

DGL is currently in the beta stage, licensed under Apache 2.0, and has a Twitter page. You can check out DGL at their website.

UK researchers have developed a new PyTorch framework for preserving privacy in deep learning
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
Deep Learning Indaba presents the state of Natural Language Processing in 2018


Node.js v10.12.0 (Current) released

Sugandha Lahoti
11 Oct 2018
4 min read
Node.js v10.12.0 was released yesterday, with notable changes to assert, cli, crypto, fs, and more. However, the Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Hence, throughout the v10.12.0 documentation there are indications of each section's stability. Let’s look at the notable changes that are stable. (A small sketch exercising several of the new APIs follows the related stories below.)

Assert module

The assert module provides a simple set of assertion tests that can be used to test invariants. It comprises a strict mode and a legacy mode, although it is recommended to only use strict mode. In Node.js v10.12.0, the diff output is improved by sorting object properties when inspecting the values that are compared with each other.

Changes to cli

The command-line interface in Node.js v10.12.0 has two improvements: The options parser now normalizes _ to - in all multi-word command-line flags, e.g. --no_warnings has the same effect as --no-warnings. It also includes bash completion for the node binary. Users can generate a bash completion script by running node --completion-bash. The output can be saved to a file which can be sourced to enable completion.

Crypto module

The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. In Node.js v10.12.0, crypto adds support for PEM-level encryption as well as an API for asymmetric key pair generation. The new methods crypto.generateKeyPair and crypto.generateKeyPairSync can be used to generate public and private key pairs. The API supports RSA, DSA, and EC and a variety of key encodings (both PEM and DER).

Improvements to file system

The fs module provides an API for interacting with the file system in a manner closely modeled around standard POSIX functions. Node.js v10.12.0 adds a recursive option to fs.mkdir and fs.mkdirSync. When this option is set to true, non-existing parent folders are automatically created.

Updates to Http/2

The http2 module provides an implementation of the HTTP/2 protocol. The new Node.js version adds support for a 'ping' event on Http2Session that is emitted whenever a non-ack PING is received, along with support for the ORIGIN frame. Also, nghttp2 is updated to v1.34.0, which adds RFC 8441 extended connect protocol support to allow the use of WebSockets over HTTP/2.

Changes in module

In the Node.js module system, each file is treated as a separate module. The module system has also been updated in v10.12.0 with module.createRequireFromPath(filename). This new method can be used to create a custom require function that resolves modules relative to the filename path.

Improvements to process

The process object is a global that provides information about, and control over, the current Node.js process. It gains a 'multipleResolves' process event that is emitted whenever an attempt is made to resolve a Promise multiple times.

Updates to url

Node.js v10.12.0 adds url.fileURLToPath(url) and url.pathToFileURL(path). These methods can be used to correctly convert between file: URLs and absolute paths.

Changes in Utilities

The util module is primarily designed to support the needs of Node.js' own internal APIs. The changes in Node.js v10.12.0 include: A new sorted option is added to util.inspect(). If set to true, all properties of an object, and Set and Map entries, are sorted in the returned string. If set to a function, it is used as a compare function. The util.inspect.custom symbol is now defined in the global symbol registry as Symbol.for('nodejs.util.inspect.custom'). Support for BigInt numbers in util.format() is also added.

Improvements in V8 API

The V8 module exposes APIs that are specific to the version of V8 built into the Node.js binary. A number of V8 C++ APIs have been marked as deprecated in v10.12.0, since they have been removed in the upstream repository. Replacement APIs are added where necessary.

Changes in Windows

The Windows msi installer now provides an option to automatically install the tools required to build native modules.

You can find the list of full changes on the Node.js blog.

Node.js and JS Foundation announce intent to merge; developers have mixed feelings.
Node.js announces security updates for all their active release lines for August 2018.
Deploying Node.js apps on Google App Engine is now easy.
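As referenced above, here is a small sketch exercising several of the stable additions; the directory path and key parameters are invented for illustration.

```js
'use strict';
const crypto = require('crypto');
const fs = require('fs');
const { fileURLToPath } = require('url');
const util = require('util');

// fs: the new recursive option creates missing parent directories.
fs.mkdirSync('/tmp/demo/nested/dirs', { recursive: true }); // hypothetical path

// crypto: the new asymmetric key pair generation API (RSA, PEM encoding).
const { publicKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048, // illustrative key size
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});
console.log(publicKey.split('\n')[0]); // -----BEGIN PUBLIC KEY-----

// url: converting between file: URLs and absolute paths.
console.log(fileURLToPath('file:///tmp/demo')); // /tmp/demo

// util: the new sorted option orders object properties in the output.
console.log(util.inspect({ b: 2, a: 1 }, { sorted: true })); // { a: 1, b: 2 }
```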


Why did Slack suffer an outage on Friday?

Fatema Patrawala
01 Jul 2019
4 min read
On Friday, Slack, an instant messaging platform for workspaces, confirmed a global outage. Millions of users reported disruption in services due to the outage, which occurred early Friday afternoon. Slack experienced a performance degradation issue impacting users all over the world, with multiple services being down. Yesterday the Slack team posted a detailed incident summary report of the service restoration.

The Slack status page read:

“On June 28, 2019 at 4:30 a.m. PDT some of our servers became unavailable, causing degraded performance in our job processing system. This resulted in delays or errors with features such as notifications, unfurls, and message posting. At 1:05 p.m. PDT, a separate issue increased server load and dropped a large number of user connections. Reconnection attempts further increased the server load, slowing down customer reconnection. Server capacity was freed up eventually, enabling all customers to reconnect by 1:36 p.m. PDT. Full service restoration was completed by 7:20 p.m. PDT. During this period, customers faced delays or failure with a number of features including file uploads, notifications, search indexing, link unfurls, and reminders. Now that service has been restored, the response team is continuing their investigation and working to calculate service interruption time as soon as possible. We’re also working on preventive measures to ensure that this doesn’t happen again in the future. If you’re still running into any issues, please reach out to us at feedback@slack.com.”

https://twitter.com/SlackStatus/status/1145541218044121089

These were the various services affected by the outage:

Notifications
Calls
Connections
Search
Messaging
Apps/Integrations/APIs
Link Previews
Workspace/Org Administration
Posts/Files

Timeline of Friday’s Slack outage

According to user reports, some Slack messages were not delivered, with users receiving an error message. On Friday, at 2:54 PM GMT+3, the Slack status page gave the initial signs of the issue: "Some people may be having an issue with Slack. We’re currently investigating and will have more information shortly. Thank you for your patience."

https://twitter.com/SlackStatus/status/1144577107759996928

According to Down Detector, Slack users noted that message editing also appeared to be impacted by the latest bug. Comments indicated it was down around the world, including in Sweden, Russia, Argentina, Italy, the Czech Republic, Ukraine, and Croatia. The Slack team continued to give updates on the issue, and on Friday evening they reported that services were getting back to normal.

https://twitter.com/SlackStatus/status/1144806594435117056

The news gained much attention on Twitter, with many commenting that Slack was already prepping for the weekend.

https://twitter.com/RobertCastley/status/1144575285980999682
https://twitter.com/Octane/status/1144575950815932422
https://twitter.com/woutlaban/status/1144577117788790785

Users on Hacker News compared Slack with other messaging platforms like Mattermost, Zulip, and Rocket.Chat. One of the user comments reads, “Just yesterday I was musing that if I were King of the (World|Company) I'd want an open-source Slack-alike that I could just drop into the Cloud of my choice and operate entirely within my private network, subject to my own access control just like other internal services, and with full access to all message histories in whatever database-like thing it uses in its Cloud. Sure, I'd still have a SPOF but it's game over anyway if my Cloud goes dark. Is there such a project, and if so does it have any traction in the real world?”

To this, another user responded, “We use this at my company - perfectly reasonable UI, don't know about the APIs/integrations, which I assume are way behind Slack…” Another user also responded, “Zulip, Rocket.Chat, and Mattermost are probably the best options.”

Slack stock surges 49% on the first trading day on the NYSE after direct public offering
Dropbox gets a major overhaul with updated desktop app, new Slack and Zoom integration
Slack launches Enterprise Key Management (EKM) to provide complete control over encryption keys

Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit

Bhagyashree R
30 Sep 2019
5 min read
Major web companies are adopting HTTP/3, the latest iteration of the HTTP protocol, in their experimental as well as production systems. Last week, Cloudflare announced that its edge network now supports HTTP/3. Earlier this month, Google’s Chrome Canary added support for HTTP/3, and Mozilla Firefox will soon ship support in a nightly release this fall. The curl command-line client also has support for HTTP/3.

In its announcement, Cloudflare shared that customers can turn on HTTP/3 support for their domains by enabling an option in their dashboards. “We’ve been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we’ll make the feature available to everyone,” the company added.

Last year, Cloudflare announced preliminary support for QUIC and HTTP/3. Customers could also join a waiting list to try QUIC and HTTP/3 as soon as they became available. Those customers who are on the waiting list and have received an email from Cloudflare can enable the support by flipping the switch on the "Network" tab of the Cloudflare dashboard. Cloudflare further added, “We expect to make the HTTP/3 feature available to all customers in the near future.”

Cloudflare’s HTTP/3 and QUIC support is backed by quiche, an implementation of the QUIC transport protocol and HTTP/3 written in Rust. It provides a low-level API for processing QUIC packets and handling connection state.

Why HTTP/3 was introduced

HTTP/1.0 required the creation of a new TCP connection for each request/response exchange between the client and the server, which resulted in latency and scalability issues. To resolve these issues, HTTP/1.1 was introduced. It included critical performance improvements such as keep-alive connections, chunked encoding transfers, byte-range requests, additional caching mechanisms, and more. Keep-alive, or persistent, connections allowed clients to reuse TCP connections, eliminating the need to constantly perform the initial connection establishment step and reducing slow start across multiple requests. However, there were still limitations: multiple requests could share a single TCP connection, but they still had to be serialized one after the other, so the client and server could execute only a single request/response exchange at a time per connection.

HTTP/2 tried to solve this problem by introducing the concept of HTTP streams, which allows the transmission of multiple requests/responses over the same connection at the same time. The drawback here is that, in case of network congestion, all requests and responses are equally affected by packet loss, even if the data that is lost concerns only a single request.

HTTP/3 aims to address the problems in the previous versions of HTTP. It uses a new transport protocol called QUIC (Quick UDP Internet Connections) instead of TCP. The QUIC transport protocol comes with features like stream multiplexing and per-stream flow control. Here’s a diagram depicting the communication between client and server using QUIC and HTTP/3:

Source: Cloudflare

HTTP/3 provides reliability at the stream level and congestion control across the entire connection. QUIC streams share the same QUIC connection, so no additional handshakes are required, and since QUIC streams are delivered independently, packet loss affecting one stream does not affect the others. QUIC also combines the typical three-way TCP handshake with the TLS 1.3 handshake. This provides users encryption and authentication by default and enables faster connection establishment. “In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS,” Cloudflare explains.

On Hacker News, a few users discussed the differences between HTTP/1, HTTP/2, and HTTP/3. Comparing the three, a user commented, “Not aware of benchmarks, but specification-wise I consider HTTP2 to be a regression...I'd rate them as follows: HTTP3 > HTTP1.1 > HTTP2 QUIC is an amazing protocol...However, the decision to make HTTP2 traffic go all through a single TCP socket is horrible and makes the protocol very brittle under even the slightest network delay or packet loss...Sure it CAN work better than HTTP1.1 under ideal network conditions, but any network degradation is severely amplified, to a point where even for traffic within a datacenter can amplify network disruption and cause an outage. HTTP3, however, is a refinement on those ideas and gets pretty much everything right afaik.”

Some expressed that the creators of HTTP/3 should also focus on the “real” issues of HTTP, including proper session support and getting rid of cookies. Others appreciated the step, saying, “It's kind of amazing seeing positive things from monopolies and evergreen updates. These institutions can roll out things fast. It's possible in hardware too-- remember Bell Labs in its hay days?”

These were some of the advantages HTTP/3 and QUIC provide over HTTP/2. Read the official announcement by Cloudflare to know more in detail.

Cloudflare plans to go public; files S-1 with the SEC
Cloudflare finally launches Warp and Warp Plus after a delay of more than five months
Cloudflare RCA: Major outage was a lot more than “a regular expression went bad”


Microsoft announces general availability of Azure SQL Data Sync

Pravin Dhandre
22 Jun 2018
2 min read
The Azure team at Microsoft has announced the general availability of the Azure SQL Data Sync tool for synchronization with on-premises databases. This new tool allows database administrators to synchronize data between Azure SQL Database and any other SQL-hosted or local servers, both unidirectionally and bidirectionally.

The data sync tool lets you distribute your data apps globally with a local replica available in each region, keeping data synchronization continuous across all regions. It helps significantly reduce connection failures and eliminate issues related to network latency. It also boosts application response times and enhances runtime reliability.

Features/capabilities of Azure SQL Data Sync:

Easy to configure - Simpler and better configuration of database workflows with an improved user experience
Speedy and reliable database schema refresh - Faster loading of database schemas with the new Server Management Objects (SMO) library
Security for Data Sync - End-to-end encryption provided for both unidirectional and bidirectional data flows, with GDPR compliance

However, this particular tool will not be a true friend to DBAs, as it does not support disaster recovery tasks. Microsoft has also made it very clear that this technology will support neither scaling Azure workloads nor Azure Database Migration Service.

Check out the Azure SQL Data Sync setup documentation to get started. For more details, you can refer to the official announcement on Microsoft's web page.

Get SQL Server user management right
Top 10 MySQL 8 performance benchmarking aspects to know
Data Exploration using Spark SQL