
Tech News


New – Amazon RDS on Graviton2 Processors from AWS News Blog

Matthew Emerick
15 Oct 2020
3 min read
I recently wrote a post to announce the availability of the M6g, R6g and C6g families of instances on Amazon Elastic Compute Cloud (EC2). These instances offer a better cost-performance ratio than their x86 counterparts. They are based on AWS-designed Graviton2 processors, which use 64-bit Arm Neoverse N1 cores.

Starting today, you can also benefit from better cost-performance for your Amazon Relational Database Service (RDS) databases, compared to the previous M5 and R5 generations of database instance types, with the availability of AWS Graviton2 processors for RDS. You can choose between the M6g and R6g instance families and three database engines (MySQL 8.0.17 and higher, MariaDB 10.4.13 and higher, and PostgreSQL 12.3 and higher). M6g instances are ideal for general-purpose workloads. R6g instances offer 50% more memory than their M6g counterparts and are ideal for memory-intensive workloads, such as big data analytics.

Graviton2 instances provide up to 35% performance improvement and up to 52% price-performance improvement for RDS open source databases, based on internal testing of workloads with varying compute and memory requirements. The Graviton2 instance family includes several new performance optimizations, such as larger L1 and L2 caches per core, higher Amazon Elastic Block Store (EBS) throughput than comparable x86 instances, fully encrypted RAM, and many others, as detailed on this page. You can benefit from these optimizations with minimal effort, by provisioning or migrating your RDS instances today.

RDS instances are available in multiple configurations, starting with 2 vCPUs, with 8 GiB memory for M6g and 16 GiB memory for R6g, and up to 10 Gbps of network bandwidth, giving you new entry-level general-purpose and memory-optimized instances. The table below shows the list of instance sizes available to you:

| Instance Size | vCPU | M6g Memory (GiB) | R6g Memory (GiB) | Dedicated EBS Bandwidth (Mbps) | Network Bandwidth (Gbps) |
|---------------|------|------------------|------------------|--------------------------------|--------------------------|
| large         | 2    | 8                | 16               | Up to 4750                     | Up to 10                 |
| xlarge        | 4    | 16               | 32               | Up to 4750                     | Up to 10                 |
| 2xlarge       | 8    | 32               | 64               | Up to 4750                     | Up to 10                 |
| 4xlarge       | 16   | 64               | 128              | 4750                           | Up to 10                 |
| 8xlarge       | 32   | 128              | 256              | 9000                           | 12                       |
| 12xlarge      | 48   | 192              | 384              | 13500                          | 20                       |
| 16xlarge      | 64   | 256              | 512              | 19000                          | 25                       |

Let's Start Your First Graviton2 Based Instance

To start a new RDS instance, I use the AWS Management Console or the AWS Command Line Interface (CLI), just like usual, and select one of the db.m6g or db.r6g instance types (this page in the documentation has all the details). Using the CLI, it would be:

```
aws rds create-db-instance \
  --region us-west-2 \
  --db-instance-identifier $DB_INSTANCE_NAME \
  --db-instance-class db.m6g.large \
  --engine postgres \
  --engine-version 12.3 \
  --allocated-storage 20 \
  --master-username $MASTER_USER \
  --master-user-password $MASTER_PASSWORD
```

The CLI confirms with:

```
{
    "DBInstance": {
        "DBInstanceIdentifier": "newsblog",
        "DBInstanceClass": "db.m6g.large",
        "Engine": "postgres",
        "DBInstanceStatus": "creating",
        ...
}
```

Migrating to Graviton2 instances is easy: in the AWS Management Console, I select my database and click Modify, then select the new DB instance class. Or, using the CLI, I can use the modify-db-instance API call. There is a short service interruption when you switch instance types. By default, the modification happens during your next maintenance window, unless you enable the ApplyImmediately option.

You can provision new Graviton2 Amazon Relational Database Service (RDS) instances, or migrate existing ones to them, in all regions where EC2 M6g and R6g are available: the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Frankfurt) AWS Regions. As usual, let us know your feedback on the AWS Forum or through your usual AWS contact.

-- seb
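
For completeness, the migration described above can also be scripted with an SDK rather than the console or CLI. Here is a minimal sketch using Python and boto3 (not from the original post); the instance identifier is a placeholder, and ApplyImmediately corresponds to the option discussed above.

```python
# Illustrative sketch (not from the original post): migrating an existing
# RDS instance to a Graviton2 class with boto3. The identifier is a placeholder.
import boto3

rds = boto3.client("rds", region_name="us-west-2")

rds.modify_db_instance(
    DBInstanceIdentifier="newsblog",   # placeholder instance name
    DBInstanceClass="db.m6g.large",    # Graviton2 general-purpose class
    ApplyImmediately=True,             # otherwise applied at the next maintenance window
)
```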


Announcing Homebrew 2.0.0!

Melisha Dsouza
04 Feb 2019
2 min read
Just a month after Homebrew 1.9.0, Homebrew 2.0.0 was released last week with official support for Linux and Windows 10 (with Windows Subsystem for Linux), automatic running of brew cleanup, removal of support for OS X Mountain Lion (versions 10.8 and below), and much more.

Features of Homebrew 2.0.0

- Homebrew now offers official support for Linux and Windows 10 with Windows Subsystem for Linux (WSL).
- brew cleanup runs periodically (every 30 days) and is triggered automatically for individual formula cleanup on reinstall, install, or upgrade.
- Homebrew has dropped support for OS X Mountain Lion (10.8 and below). With this new version, Homebrew offers better performance, as its portable Ruby is now built on OS X Mavericks (10.9).
- Debugging git issues is now easier, as brew update-reset resets all repositories and taps to their upstream versions.
- Homebrew reduces errors when formulae are built from source, and filtering all user environment variables allows the removal of many workarounds for niche issues.
- The team is now attempting to migrate away from Jenkins to a suitable hosted CI provider, after a security researcher, Eric Holmes, identified a GitHub personal access token leak from Homebrew's Jenkins in 2018.
- Homebrew Cask's downloads are quarantined, to provide the same level of security as manually downloading these tools.
- Homebrew now uses a proper option parser to generate man brew and --help output, so users can expect better feedback when they pass invalid options to formulae or commands, making argument handling simpler and more robust.

To know about the other features in detail, head over to Homebrew's official page.

Conda 4.6.0 released with support for more shells, better interoperability among others
Mozilla releases Firefox 65 with support for AV1, enhanced tracking protection, and more!
Typescript 3.3 is finally released!


Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful

Bhagyashree R
05 Sep 2018
2 min read
Yesterday, Atlassian made two major announcements: the acquisition of OpsGenie and the release of Jira Ops. Both products aim to help IT operations teams resolve downtime quickly and reduce the occurrence of incidents over time. Atlassian is an Australian enterprise software company that develops collaboration software for teams, with products including JIRA, Confluence, HipChat, Bitbucket, and Stash.

OpsGenie: Alert the right people at the right time

OpsGenie is an IT alert and notification management tool that routes critical alerts to all the right people (operations and software development teams). It uses a sophisticated combination of scheduling, escalation paths, and notifications that take things like time zones and holidays into account.

OpsGenie is a prompt and reliable alerting system that comes with the following features:

- It integrates with monitoring, ticketing, and chat tools to notify the team over multiple channels, providing the information your team needs to immediately begin resolution.
- It provides various notification methods, such as email, SMS, push, phone call, and group chat, to ensure alerts are seen by users.
- You can build and modify schedules and define escalation rules within one interface.
- It tracks everything related to alerts and incidents, which helps you gain insight into areas of success and opportunities for improvement.
- You can define escalation policies and on-call schedules with rotations, to notify the right people and escalate when necessary.

Jira Ops: Resolve incidents faster

Jira Ops is a unified incident command center that gives the response team a single place for response coordination. It integrates with OpsGenie, Slack, Statuspage, PagerDuty, and xMatters. It guides the response team through the response workflow and automates common steps, such as creating a new Slack room for each incident. Jira Ops is available through Atlassian's early access program.

Jira Ops enables you to resolve downtime quickly by providing the following functionality:

- It quickly alerts you about what is affected and what the associated impacts are.
- You can check the status, severity level, and duration of the incident.
- You can see real-time response activities.
- You can also find the associated Slack channel, current incident manager, and technical lead.

You can find more details on OpsGenie and Jira Ops on Atlassian's official website.

Atlassian sells Hipchat IP to Slack
Atlassian open sources Escalator, a Kubernetes autoscaler project
Docker isn't going anywhere
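
As an illustration of the kind of integration described above, here is a minimal sketch that creates an alert through OpsGenie's REST Alert API (v2). The API key, team name, and alert fields are placeholders, so treat this as a sketch of the idea rather than a documented onboarding path.

```python
# Hypothetical example: creating an OpsGenie alert via its REST API (v2).
# OPSGENIE_API_KEY, the team name, and the alert fields are placeholders.
import requests

OPSGENIE_API_KEY = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # an integration API key

response = requests.post(
    "https://api.opsgenie.com/v2/alerts",
    headers={
        "Authorization": f"GenieKey {OPSGENIE_API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "message": "Checkout service is returning 5xx errors",
        "priority": "P1",  # P1 (critical) through P5 (informational)
        "responders": [{"name": "sre-team", "type": "team"}],
    },
)
response.raise_for_status()
print(response.json()["requestId"])  # alert creation is processed asynchronously
```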


DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games

Savia Lobo
28 Aug 2019
3 min read
A few days ago, researchers at DeepMind introduced OpenSpiel, a framework for writing games and algorithms for research in general reinforcement learning and search/planning in games. The core API and games are implemented in C++ and exposed to Python; algorithms and tools are written in both C++ and Python. There is also a branch of pure Swift in the swift subdirectory. In their paper, the researchers write, "We hope that OpenSpiel could have a similar effect on general RL in games as the Atari Learning Environment has had on single-agent RL."

Read Also: Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

OpenSpiel allows evaluating written games and algorithms on a variety of benchmark games, as it includes implementations of over 20 different game types, including simultaneous-move games, perfect- and imperfect-information games, gridworld games, an auction game, and several normal-form/matrix games. It includes tools to analyze learning dynamics and other common evaluation metrics. It also supports n-player (single- and multi-agent), zero-sum, cooperative and general-sum, and one-shot and sequential games.

OpenSpiel has been tested on Linux (Debian 10 and Ubuntu 19.04); the researchers have not tested the framework on macOS or Windows. "Since the code uses freely available tools, we do not anticipate any (major) problems compiling and running under other major platforms," the researchers added.

The purpose of OpenSpiel is to promote "general multiagent reinforcement learning across many different game types, in a similar way as general game-playing but with a heavy emphasis on learning and not in competition form," the research paper mentions. The framework is "designed to be easy to install and use, easy to understand, easy to extend ("hackable"), and general/broad."

Read Also: DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games

Design constraints for OpenSpiel

The two main design criteria that OpenSpiel is based on are:

- Simplicity: OpenSpiel provides easy-to-read, easy-to-use code that can be used to learn from and to build prototypes, rather than fully optimized code that would require additional assumptions.
- Dependency-free: The researchers say, "dependencies can be problematic for long-term compatibility, maintenance, and ease-of-use." Hence, the OpenSpiel framework does not introduce dependencies, keeping it portable and easy to install.

Swift OpenSpiel: A port to use Swift for TensorFlow

The swift/ folder contains a port of OpenSpiel to use Swift for TensorFlow. This Swift port explores using a single programming language for the entire OpenSpiel environment, from game implementations to the algorithms and deep learning models. It is intended for serious research use; as the Swift for TensorFlow platform matures and gains additional capabilities (e.g. distributed training), expect the kinds of algorithms that are expressible and tractable to train to grow significantly.

Alongside its tools for visualization and evaluation, OpenSpiel also provides the α-Rank algorithm, which leverages evolutionary game theory to rank AI agents interacting in multiplayer games. OpenSpiel currently supports using α-Rank for both single-population (symmetric) and multi-population games.

Developers are excited about this release and want to try out the framework.

https://twitter.com/SMBrocklehurst/status/1166435811581202443
https://twitter.com/sharky6000/status/1166349178412261376

To know more about this news in detail, head over to the research paper. You can also check out the GitHub page.

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube
DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers
Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks
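
To make the framework's shape concrete, here is a minimal sketch of the Python API playing one of the bundled games with uniformly random actions. It assumes OpenSpiel has been built or installed so that the pyspiel module is importable; the calls themselves are the framework's core API.

```python
# A minimal sketch of the OpenSpiel Python API: play one random game of
# tic-tac-toe to a terminal state. Assumes `pyspiel` is importable.
import random
import pyspiel

game = pyspiel.load_game("tic_tac_toe")  # one of the bundled benchmark games
state = game.new_initial_state()

# Play uniformly random legal actions until the game ends.
while not state.is_terminal():
    action = random.choice(state.legal_actions())
    state.apply_action(action)

print(state)            # the final board
print(state.returns())  # per-player returns for this zero-sum game
```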


Gremlin makes chaos engineering with Docker easier with new container discovery feature

Richard Gall
28 Aug 2018
3 min read
Gremlin, the product that's bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering tests alongside Docker.

Chaos engineering and containers have always been closely related - arguably the loosely coupled architectural style of modern software driven by containers has, in turn, led to an increased demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today's updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery does two things: it makes it easier for engineers to identify specific Docker containers, and, more importantly, it allows them to simulate attacks or errors within those containerized environments. The real benefit is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a 'chaos test' on can ordinarily be very challenging and time-consuming.

Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." This new feature could save the engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly."

Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?

As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way when it comes to containerization - its market share growing healthily - making it easier to perform resiliency tests on containers is incredibly important for the product. It's not a stretch to say that Gremlin has probably been working on this feature for some time, with users placing it high on their list of must-haves.

Chaos engineering is still in its infancy - this year's Skill Up report found that it remains on the periphery of many developers' awareness. However, that could quickly change, and it appears that Gremlin is working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.


Python comes third in TIOBE popularity index for the first time

Prasad Ramesh
10 Sep 2018
2 min read
Python made it to third position in the TIOBE index for the first time in its history. The TIOBE programming community index is a common measure of programming language popularity, created and maintained by the TIOBE company, based in the Netherlands. Popularity in the index is calculated from the number of search engine results for queries containing the name of the language; searches on Google, Google Blogs, MSN, Yahoo!, Baidu, Wikipedia, and YouTube are considered. The TIOBE index is updated once a month.

Python is third behind Java and C. Python's rating is 7.653 percent, while Java has a rating of 17.436 percent and C is in second place, rated at 15.447 percent. Python moved above C++ into third place; C++ was third last month and is now fourth, with a rating of 7.394 percent.

Python is increasingly ubiquitous, being used in many research areas like AI and machine learning, which are all the buzz today. The increasing popularity is not surprising, as Python has versatile applications: AI and machine learning, software development, web development, scripting, scientific applications, and even games, you name it. Python is easy to install, learn, use, and deploy, and its syntax is simple and beginner-friendly.

TIOBE states that this third place really took a long time: "At the beginning of the 1990s it entered the chart. Then it took another 10 years before it reached the TIOBE index top 10 for the first time. After that it slowly but surely approached the top 5 and eventually the top 3." Python was also the index's language of the year in 2007 and 2010. The current top 5 languages are Java, C, Python, C++, and Visual Basic .NET.

To read more and to view the complete list, visit the TIOBE website.

Build a custom news feed with Python [Tutorial]
Home Assistant: an open source Python home automation hub to rule all things smart
Build botnet detectors using machine learning algorithms in Python [Tutorial]
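
As a toy illustration of the share-of-hits idea behind such a rating, the sketch below normalizes hit counts into percentage ratings. The hit counts are invented for illustration only; TIOBE's real methodology aggregates and weights many search engines and applies its own filtering.

```python
# Toy sketch of a share-of-hits popularity rating. NOT TIOBE's real method:
# the hit counts below are invented, and the real index weights many engines.
hits = {"Java": 1_743_600, "C": 1_544_700, "Python": 765_300, "C++": 739_400}

total = sum(hits.values())
ratings = {lang: 100 * count / total for lang, count in hits.items()}

for lang, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{lang:>6}: {rating:5.2f}%")
```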

Google releases Flutter 1.9 at GDD (Google Developer Days) conference

Amrata Joshi
13 Sep 2019
3 min read
Last week, the team behind Flutter announced at Google Developer Days the stable release of Flutter 1.9. Flutter 1.9 received more than 1,500 PRs (pull requests) from more than 100 contributors, and comes with support for macOS Catalina and iOS 13, improved tooling support, new Dart language features, new Material widgets, and much more. The team also announced the successful integration of Flutter's web support into the main Flutter repository, which will allow developers to write for desktop, mobile, and web with the same codebase. Tencent, the well-known internet brand, also uses Flutter in its mobile apps.

https://twitter.com/Appinventiv/status/1171689785733173248
https://twitter.com/ZoeyFan723/status/1171566234892210176

What's new in Flutter 1.9

Support for macOS Catalina and iOS 13

Since Apple is planning to release Catalina, the latest version of macOS, the Flutter team has updated the end-to-end tooling experience so that it works properly with Catalina and Xcode 11. Support has been added for the new Xcode build system, which enables 64-bit support throughout the toolchain and simplifies platform dependencies. This release also includes an implementation of the iOS 13 draggable toolbar, along with support for vibration feedback, long-press, and drag-from-right. The team is also working on an iOS dark mode, for which a number of pull requests have already been merged. Flutter users can now turn on experimental support for Bitcode, Apple's platform-independent intermediate representation of a compiled program.

Material components in Flutter 1.9

The Material design components and features have been updated in Flutter 1.9. This release comes with new widgets, including ToggleButtons and ColorFiltered.

Dart 2.5

As part of the Flutter 1.9 release, the team is also releasing Dart 2.5, which includes support for a pre-release of the Foreign Function Interface (FFI).

New projects default to Swift and Kotlin in Flutter 1.9

With this release, new projects default to Swift and Kotlin instead of Objective-C and Java for iOS and Android respectively. Since a lot of packages are written in Swift, making it the default language removes the manual work of adding those packages.

Flutter on the web

The team also announced that the flutter_web repository has been deprecated and web support has been merged into the main flutter repository. Users seem quite excited about this news.

https://twitter.com/max_myracle/status/1171530782340304899
https://twitter.com/annnoo96/status/1171442355875938304

To know more about this news, check out the official post.

Other interesting news in mobile

Apple Music is now available on your web browser
Android 10 releases with gesture navigation, dark theme, smart reply, live captioning, privacy improvements and updates to security
Is Apple's 'Independent Repair Provider Program' a bid to avoid the 'Right To Repair' bill?


Grafana Labs announces general availability of Loki 1.0, a multi-tenant log aggregation system

Savia Lobo
20 Nov 2019
3 min read
Today, at the ongoing KubeCon 2019, Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient, and cost-effective approach to log aggregation.

The Loki project was first introduced at KubeCon Seattle in 2018. Before the official launch, the project was started inside Grafana Labs and used internally to monitor all of Grafana Labs' infrastructure, ingesting around 1.5 TB/10 billion log lines a day. Released under the Apache 2.0 license, Loki is optimized for Grafana, Kubernetes, and Prometheus. Within a year, the project has received more than 1,000 contributions from 137 contributors and has nearly 8,000 stars on GitHub.

With Loki 1.0, users can instantaneously switch between metrics and logs, preserving context and reducing MTTR (mean time to recovery). By storing compressed, unstructured logs and indexing only metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing.

Loki's design is inspired by Prometheus, the open source monitoring solution for the cloud-native ecosystem, and it offers a Prometheus-like query language called LogQL to further integrate with that ecosystem.

Tom Wilkie, VP of Product at Grafana Labs, said, "Grafana Labs is proud to have created Loki and fostered the development of the project, building first-class support for Loki into Grafana and ensuring customers receive the support and features they need." He further added, "We are committed to delivering an open and composable observability platform, of which Loki is a key component, and continue to rely on the power of open source and our community to enhance observability into application and infrastructure."

Grafana Labs also offers enterprise services and support for Loki, which include:

- Support and training from Loki maintainers and experts
- 24 x 7 x 365 coverage from the geographically distributed Grafana team
- Per-node pricing that scales with deployment

Read more about Grafana Loki in detail on GitHub.

"Don't break your users and create a community culture", says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
KubeCon + CloudNativeCon EU 2019 highlights: Microsoft's Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!
Grafana 6.2 released with improved security, enhanced provisioning, Bar Gauge panel, lazy loading and more
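
To give a flavor of the LogQL integration mentioned above, here is a minimal sketch that runs a range query against a Loki instance over its HTTP API. The Loki URL and the {app="frontend"} label selector are placeholders for a real deployment; the /loki/api/v1/query_range path is Loki's documented range-query endpoint.

```python
# Minimal sketch: running a LogQL query against a Loki instance's HTTP API.
# The URL and label selector are placeholders for a real deployment.
import time
import requests

LOKI_URL = "http://localhost:3100"  # assumed local Loki instance

now_ns = int(time.time() * 1e9)
params = {
    "query": '{app="frontend"} |= "error"',  # LogQL: stream selector + line filter
    "start": now_ns - int(3600 * 1e9),       # last hour, in nanoseconds
    "end": now_ns,
    "limit": 100,
}

resp = requests.get(f"{LOKI_URL}/loki/api/v1/query_range", params=params)
resp.raise_for_status()

for stream in resp.json()["data"]["result"]:
    print(stream["stream"])            # the stream's label set
    for ts, line in stream["values"]:  # (timestamp_ns, log line) pairs
        print(" ", line)
```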


NumPy 1.15.0 release is out!

Savia Lobo
24 Jul 2018
2 min read
NumPy 1.15.0 is out, with a large number of changes, including several cleanups, deprecations of old functions, and improvements to many existing functions. The Python versions supported by NumPy 1.15.0 are 2.7 and 3.4 to 3.7.

Some of the highlights in this release:

- NumPy has switched to pytest for testing and no longer bundles the nose framework; the old nose-based interface is still available for downstream projects.
- A new numpy.printoptions context manager can set print options temporarily, for the scope of the with block:

```python
import numpy as np

with np.printoptions(precision=2):
    print(np.array([2.0]) / 3)
# [0.67]
```

- Improvements to the histogram functions. This version includes numpy.histogram_bin_edges, a function to get the edges of the bins used by a histogram without needing to calculate the histogram itself.
- Support for unicode field names in Python 2.7.
- Improved support for PyPy.
- Fixes and improvements to numpy.einsum, which evaluates the Einstein summation convention on its operands.

New features in NumPy 1.15.0

Added np.gcd and np.lcm ufuncs for integer and object types: np.gcd and np.lcm compute the greatest common divisor and the lowest common multiple, respectively. They work on all the NumPy integer types, as well as the built-in arbitrary-precision Decimal and long types.

Support for cross-platform builds for iOS: the build system has been modified to add support for the _PYTHON_HOST_PLATFORM environment variable, used by distutils when compiling on one platform for another. This makes it possible to compile NumPy for iOS targets.

Addition of the return_indices keyword for np.intersect1d: the new keyword returns the indices of the elements in the two input arrays that correspond to the common elements.

Build system: this version adds experimental support for the 64-bit RISC-V architecture.

Future changes expected in later versions

NumPy 1.16 and NumPy 1.17 will drop support for Python 3.4 and Python 2.7, respectively.

Read more about this release in detail on its GitHub page.

Implementing matrix operations using SciPy and NumPy
NumPy: Commonly Used Functions
Installing NumPy, SciPy, matplotlib, and IPython
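
The additions above are easy to try interactively. A short demo follows; the arrays and values are illustrative, but any NumPy 1.15+ install should reproduce the commented results.

```python
# Short demo of a few NumPy 1.15.0 additions (illustrative values).
import numpy as np

# New gcd/lcm ufuncs for integer types.
print(np.gcd(12, 18))  # 6  -- greatest common divisor
print(np.lcm(4, 6))    # 12 -- lowest common multiple

# Bin edges without computing the histogram itself.
data = np.array([1, 2, 2, 3, 7, 8])
print(np.histogram_bin_edges(data, bins=3))  # [1.  3.33...  5.66...  8.]

# intersect1d can now also return the matching indices.
a = np.array([1, 3, 4, 3])
b = np.array([3, 1, 2, 1])
common, ia, ib = np.intersect1d(a, b, return_indices=True)
print(common, ia, ib)  # [1 3] [0 1] [1 0]
```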


New – Redis 6 Compatibility for Amazon ElastiCache from AWS News Blog

Matthew Emerick
07 Oct 2020
5 min read
Since the last Redis 5.0 compatibility update for Amazon ElastiCache, there have been lots of improvements to Amazon ElastiCache for Redis, including upstream support such as 5.0.6. Earlier this year, we announced Global Datastore for Redis, which lets you replicate a cluster in one region to clusters in up to two other regions. Recently we improved your ability to monitor your Redis fleet by enabling 18 additional engine and node-level CloudWatch metrics. We also added support for resource-level permission policies, allowing you to assign AWS Identity and Access Management (IAM) principal permissions to specific ElastiCache resources.

Today, I am happy to announce Redis 6 compatibility for Amazon ElastiCache for Redis. This release brings several new and important features:

Managed Role-Based Access Control – Amazon ElastiCache for Redis 6 now provides you with the ability to create and manage users and user groups that can be used to set up Role-Based Access Control (RBAC) for Redis commands. You can now simplify your architecture while maintaining security boundaries by having several applications use the same Redis cluster without being able to access each other's data. You can also take advantage of granular access control and authorization to create administration and read-only user groups. Amazon ElastiCache enhances the new Access Control Lists (ACL) introduced in open source Redis 6 to provide a managed RBAC experience, making it easy to set up access control across several Amazon ElastiCache for Redis clusters.

Client-Side Caching – Amazon ElastiCache for Redis 6 comes with server-side enhancements to deliver efficient client-side caching to further improve your application performance. Redis clusters now support client-side caching by tracking client requests and sending invalidation messages for data stored on the client. In addition, you can take advantage of a broadcast mode that allows clients to subscribe to a set of notifications from Redis clusters.

Significant Operational Improvements – This release also includes several enhancements that improve application availability and reliability. Specifically, Amazon ElastiCache has improved replication under low-memory conditions, especially for workloads with medium/large-sized keys, by reducing latency and the time it takes to perform snapshots. Open source Redis enhancements include improvements to the expiry algorithm for faster eviction of expired keys, and various bug fixes.

Note that open source Redis 6 also announced support for encryption-in-transit, a capability that has already been available in Amazon ElastiCache for Redis from version 4.0.10 onwards. This release of Amazon ElastiCache for Redis 6 does not impact Amazon ElastiCache for Redis' existing support for encryption-in-transit.

In order to apply RBAC to a new or existing Redis 6 cluster, we first need to ensure you have a user and user group created. We'll review the process below.

Using Role-Based Access Control – How it works

As an alternative to authenticating users with the Redis AUTH command, Amazon ElastiCache for Redis 6 offers Role-Based Access Control (RBAC). With RBAC, you create users and assign them specific permissions via an access string. To create, modify, and delete users and user groups, use the User Management and User Group Management sections in the ElastiCache console.

ElastiCache automatically configures a default user with user ID and user name "default", and you can add it, or newly created users, to new groups in User Group Management. If you want to replace the default user with your own password and access settings, create a new user with the user name set to "default" and then swap it with the original default user. We recommend using your own strong password for a default user. The following example shows how to swap the original default user for one with a modified access string, via the AWS CLI.

Create the replacement "default" user:

```
$ aws elasticache create-user \
    --user-id "new-default-user" \
    --user-name "default" \
    --engine "REDIS" \
    --passwords "a-str0ng-pa))word" \
    --access-string "off +get ~keys*"
```

Create a user group and add the user you created previously:

```
$ aws elasticache create-user-group \
    --user-group-id "new-default-group" \
    --engine "REDIS" \
    --user-ids "default"
```

Swap the new default user with the original default user:

```
$ aws elasticache modify-user-group \
    --user-group-id "new-default-group" \
    --user-ids-to-add "new-default-user" \
    --user-ids-to-remove "default"
```

You can also modify a user's password or change its access permissions using the modify-user command, or remove a specific user using the delete-user command; a deleted user is removed from any user groups to which it belongs. Similarly, you can modify a user group by adding new users and/or removing current users using the modify-user-group command, or delete a user group using the delete-user-group command. Note that the user group itself, not the users belonging to the group, is what gets deleted.

Once you have created a user group and added users, you can assign the user group to a replication group, or migrate between Redis AUTH and RBAC. For more information, see the documentation.

Redis 6 cluster for ElastiCache – Getting Started

As usual, you can use the ElastiCache console, CLI, APIs, or a CloudFormation template to create a new Redis 6 cluster. I'll use the console: I choose Redis from the navigation pane and click Create. I select the "Encryption in-transit" checkbox to ensure the "Access Control" options are visible, then choose an Access Control option: either a User Group Access Control List (the RBAC feature) or the Redis AUTH default user. If I select RBAC, I can choose one of the available user groups. My cluster is up and running within minutes.

You can also use the in-place upgrade feature on an existing cluster: select the cluster, click Action and then Modify, and change the Engine Version from the 5.0.6-compatible engine to 6.x.

Now Available

Amazon ElastiCache for Redis 6 is now available in all AWS regions. For a list of supported ElastiCache for Redis versions, refer to the documentation. Please send us feedback in the AWS forum for Amazon ElastiCache, through AWS support, or via your account team.

– Channy;
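
The same user/user-group workflow can be scripted with an SDK as well. Below is a minimal sketch using Python and boto3 rather than the AWS CLI; the create_user, create_user_group, and modify_user_group operations mirror the commands above, and the IDs and password are the same placeholders.

```python
# Sketch of the RBAC setup above using boto3 instead of the AWS CLI.
# IDs, names, and the password are placeholders.
import boto3

elasticache = boto3.client("elasticache", region_name="us-west-2")

# Create a replacement "default" user with a restricted access string.
elasticache.create_user(
    UserId="new-default-user",
    UserName="default",
    Engine="REDIS",
    Passwords=["a-str0ng-pa))word"],
    AccessString="off +get ~keys*",
)

# Create a user group containing the original default user...
elasticache.create_user_group(
    UserGroupId="new-default-group",
    Engine="REDIS",
    UserIds=["default"],
)

# ...then swap in the new default user and drop the original.
elasticache.modify_user_group(
    UserGroupId="new-default-group",
    UserIdsToAdd=["new-default-user"],
    UserIdsToRemove=["default"],
)
```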

Introducing Ballista, a distributed compute platform based on Kubernetes and Rust

Amrata Joshi
18 Jul 2019
3 min read
Andy Grove, a software engineer, introduced Ballista, a distributed compute platform, and in a recent blog post explained his journey on the project. Roughly eighteen months ago, he started the DataFusion project, an in-memory query engine that uses Apache Arrow as its memory model. The aim was to build a distributed compute platform in Rust that could compete with Apache Spark, which turned out to be difficult.

Grove writes in the blog post, "Unsurprisingly, this turned out to be an overly ambitious goal at the time and I fell short of achieving that. However, some very good things came out of this effort. We now have a Rust implementation of Apache Arrow with a growing community of committers, and DataFusion was donated to the Apache Arrow project as an in-memory query execution engine and is now starting to see some early adoption."

He then took a break from working on Arrow and DataFusion for a couple of months and focused on some deliverables at work. Afterwards he started a new PoC (proof of concept) project, his second attempt at building a distributed platform with Rust, but this time with the advantage of already having Arrow and DataFusion available. The new project is called Ballista, a distributed compute platform based on Kubernetes and the Rust implementation of Apache Arrow.

A Ballista cluster currently comprises a number of individual pods within a Kubernetes cluster, and it can be created and destroyed via the Ballista CLI. Ballista applications can be deployed to Kubernetes with the Ballista CLI, and they use Kubernetes service discovery to connect to the cluster. Since there is no distributed query planner yet, Ballista applications must manually build the query plans to be executed on the cluster.

To make the project practically useful and push it beyond the limits of a PoC, Grove listed some of the items on the roadmap for v1.0.0:

- Implement a distributed query planner.
- Support all DataFusion logical plans and expressions.
- Support user code as part of distributed query execution.
- Support interactive SQL queries against a cluster via gRPC.
- Support the Arrow Flight protocol and Java bindings.

This PoC project will help drive the requirements for DataFusion, and it has already led to three DataFusion PRs being merged into the Apache Arrow codebase.

There are mixed reviews of this initiative. One user commented on Hacker News, "Hang in there mate :) I really don't think you deserve a lot of the crap you've been given in this thread. Someone has to try something new." Another user commented, "The fact people opposed to your idea/work means it is valuable enough for people to say something against and not ignore it."

To know more about this news, check out the official announcement.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized 'Future' trait, NLL for Rust 2015, and more
Introducing Vector, a high-performance data router, written in Rust


A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

Prasad Ramesh
02 Oct 2018
2 min read
An open-source libre GPU project is in the works by Luke Kenneth Casson Leighton, the hardware engineer who developed the EOMA68, an earth-friendly computer. The project already has access to $250k USD in funding.

The basic idea for this "libre GPU" is to use a RISC-V processor, with the GPU being mostly software-based. It will leverage the LLVM compiler infrastructure and utilize a software-based Vulkan renderer to emit code and run on the RISC-V processor. The Vulkan implementation will be written in the Rust programming language.

The project's current road-map only has details on the software side: figuring out the state of the RISC-V LLVM back-end, writing a user-space graphics driver, and implementing the necessary bits for proposed RISC-V extensions like "Simple-V". While doing this, they will start figuring out the hardware design and the rest of the project. The road-map is quite simplified for the arduous task at hand. The website notes: "Once you've been through the 'Extension Proposal Process' with Simple-V, it need never be done again, not for one single parallel / vector / SIMD instruction, ever again."

This process will include creating a fixed-function 3D "FP to ARGB" custom instruction and a custom extension with special 3D pipelines. With Simple-V, there is no need to worry about how those operations would be parallelised. This is not a new concept; it's borrowed directly from videocore-iv, which calls it "virtual parallelism".

Combining RISC-V, Rust, LLVM, and Vulkan into one open-source project is an enormous effort on both the software and hardware ends, and it is difficult even with the funding, considering it is a software-based GPU. It is worth noting that the EOMA68 project, started by Luke in 2016, raised over $227k USD from crowdfunding participants and hasn't shipped yet.

To know more about this project, visit the libre risc-v website.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn


Why It’s Time for Site Reliability Engineering to Shift Left from DevOps.com

Matthew Emerick
16 Oct 2020
1 min read
By adopting a multilevel approach to site reliability engineering and arming your team with the right tools, you can unleash benefits that impact the entire service-delivery continuum.

In today's application-driven economy, the infrastructure supporting business-critical applications has never been more important. In response, many companies are recruiting site reliability engineering (SRE) specialists to help them […]

The post Why It's Time for Site Reliability Engineering to Shift Left appeared first on DevOps.com.

Angular localization with Ivy from Angular Blog - Medium

Matthew Emerick
09 Sep 2020
5 min read
Part of the new Angular rendering engine, Ivy, includes a new approach to localizing applications, specifically extracting and translating text. This article explains the benefits and some of the implementation of this new approach.

Prior to Ivy, the only way to add localizable messages to an Angular application was to mark them in component templates using the i18n attribute:

```html
<div i18n>Hello, World!</div>
```

The Angular compiler would replace this text when compiling the template with different text if a set of translations was provided in the compiler configuration. The i18n tags are very powerful: they can be used in attributes as well as content; they can include complex nested ICU (International Components for Unicode) expressions; they can have metadata attached to them. See our i18n guide for more information.

But there were some shortcomings to this approach. The most significant concern was that translation had to happen during template compilation, which occurs right at the start of the build pipeline. The result of this is that the full build (compilation, bundling, minification, etc.) had to happen for each locale that you wanted to support in your application. (Build times will vary based on project size.) If a single build took 3 minutes, then the total build time to support 9 locales would be 3 minutes x 9 locales = 27 minutes.

Moreover, it was not possible to mark text in application code for translation, only text in component templates. This resulted in awkward workarounds where artificial components were created purely to hold text that would be translated. Finally, it was not possible to load translations at runtime, which meant applications could not be handed to an end-user who might want to provide translations of their own, without that end-user having to build the application themselves.

The new localization approach is based around the concept of tagging strings in code with a template literal tag handler called $localize. The idea is that strings that need to be translated are "marked" using this tag:

```ts
const message = $localize`Hello, World!`;
```

This $localize identifier can be a real function that does the translation at runtime, in the browser. But, significantly, it is also a global identifier that survives minification. This means it can act simply as a marker in the code that a static post-processing tool can use to replace the original text with translated text before the code is deployed. For example, the following code:

```ts
warning = $localize`${this.process} is not right`;
```

could be replaced with:

```ts
warning = "" + this.process + ", ce n'est pas bon.";
```

The result is that all references to $localize are removed, and there is zero runtime cost to rendering the translated text.

The Angular template compiler, for Ivy, has been redesigned to generate $localize tagged strings rather than doing the translation itself. For example, the following template:

```html
<h1 i18n>Hello, World!</h1>
```

would be compiled to something like:

```ts
ɵɵelementStart(0, "h1");              // <h1>
ɵɵi18n(1, $localize`Hello, World!`);  // Hello, World!
ɵɵelementEnd();                       // </h1>
```

This means that after the Angular compiler has completed its work, all the template text marked with i18n attributes has been converted to $localize tagged strings, which can be processed just like any other tagged string.

Notice also that $localize tagged strings can occur in any code (user code, or code generated from templates in both applications and libraries) and are not affected by minification. So while the post-processing tool might receive code that looks like this:

```ts
...var El,kl=n("Hfs6"),Sl=n.n(kl);El=$localize`Hello, World!`;let Cl=(()=>{class e{constructor(e)...
```

it is still able to identify and translate the tagged message.

The result is that we can reorder the build pipeline to do translation at the very end of the process, resulting in a considerable build-time improvement. (Build times will vary based on project size.) Here the build time is still 3 minutes, but since the translation is done as a post-processing step, we only incur that build cost once. The post-processing of the translations is also very fast, since the tool only has to parse the code for $localize tagged strings; in this case around 5 seconds. The total build time for 9 locales is therefore now 3 minutes + (9 x 5 seconds) = 3 minutes 45 seconds, compared to 27 minutes for the pre-Ivy translated builds. Similar improvements have been seen in real life by teams already using this approach.

The post-processing of translations is already built into the Angular CLI, and if you have configured your projects according to our i18n guide you should already be benefiting from these faster build times.

Currently, the use of $localize in application code is not yet publicly supported or documented. We will be working on making this fully supported in the coming months. It requires new message extraction tooling: the current (pre-Ivy) message extractor does not find $localize text in application code. This is being integrated into the CLI now and should be released as part of 10.1.0. We are also looking into how we can better support translations in 3rd-party libraries using this new approach. Since this would affect the Angular Package Format (APF), we expect to run a Request for Comment (RFC) before implementing that.

In the meantime, enjoy the improved build times and keep an eye out for full support of application-level localization of text.

Angular localization with Ivy was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Liz Fong-Jones reveals she is leaving Google in February

Richard Gall
03 Jan 2019
2 min read
Liz Fong-Jones has been a key figure in the politicization of Silicon Valley over the last 18 months. But the Developer Advocate at Google Cloud Platform revealed today (3rd January 2019) that she is to leave the company in February, citing Google's lack of leadership in response to the demands made by employees during the Google walkout in November 2018.

Fong-Jones hinted before Christmas that she had found another role, writing on Twitter that she had found a new job:

https://twitter.com/lizthegrey/status/1075837650433646593

That was confirmed today when Fong-Jones tweeted "Resignation letter is in. February 25 is my last day." Her new role hasn't yet been revealed, but it appears that she will remain within SRE. She told one follower that she will likely be at SRECon in Dublin later in the year.

https://twitter.com/lizthegrey/status/1080837397347221505

She made it clear that she had no issue with her team, stating that her decision to leave was instead "a reflection on what Google's become over the 11 years I've worked there."

Why Liz Fong-Jones's exit from Google is important

Fong-Jones's exit from Google doesn't reflect well on the company. If anything, it only serves to highlight the company's stubbornness. Despite having months to respond to serious allegations of sexual harassment and systemic discrimination, there appears to be a refusal to acknowledge problems, let alone find a way forward to tackle them.

From Fong-Jones's perspective, the move is probably as much pragmatic as it is symbolic. She spoke on Twitter of "burnout" at "doing what has to be done, as second shift work."

https://twitter.com/lizthegrey/status/1080848586135560192

While there are clearly personal reasons for Fong-Jones to leave Google, because of her importance as a figure in conversations around tech worker rights and diversity, her exit will have significant symbolic power. It's likely that she'll continue to play an important part in helping tech workers - in Silicon Valley and elsewhere - organize for a better future, even as she aims to do "more of what you want to do".