
Tech News - Cloud & Networking

376 Articles

Microsoft Cloud Services get GDPR Enhancements

Vijin Boricha
25 Apr 2018
2 min read
With the GDPR deadline looming closer every day, Microsoft has started applying the General Data Protection Regulation (GDPR) to its cloud services. Microsoft recently announced enhancements to help organizations using Azure and Office 365 services meet GDPR requirements. With these improvements, it aims to ensure that both Microsoft's services and the organizations using them will be GDPR-compliant by the law's enforcement date.

Microsoft tools supporting GDPR compliance include:

  • Service Trust Portal, which provides GDPR information resources
  • Security and Compliance Center in the Office 365 Admin Center
  • Office 365 Advanced Data Governance, for classifying data
  • Azure Information Protection, for tracking and revoking documents
  • Compliance Manager, for keeping track of regulatory compliance
  • Azure Active Directory Terms of Use, for obtaining users' informed consent

Microsoft recently released a preview of a new Data Subject Access Request interface in the Security and Compliance Center and, via a new tab, in the Azure Portal. According to the Microsoft 365 team, this interface is also available in the Service Trust Portal. A Microsoft Tech Community post also states that the portal will be getting a "Data Protection Impact Assessments" section in the coming weeks.

With the new Data Subject Access Request interface preview, organizations can search for "relevant data across Office 365 locations", spanning Exchange, SharePoint, OneDrive, Groups, and Microsoft Teams. As Microsoft explains, once the search completes, the data is exported for review before being transferred to the requestor.

According to Microsoft, the Data Subject Access Request capabilities will be out of preview before the GDPR deadline of May 25th. It also says that IT professionals will be able to execute DSRs (Data Subject Requests) against system-generated logs.

To learn more, you can visit Microsoft's blog post.


Red Hat’s Quarkus announces plans for Quarkus 1.0, releases its rc1 

Vincy Davis
11 Nov 2019
3 min read
Update: On 25th November, the Quarkus team announced the release of the Quarkus 1.0.0.Final bits. Head over to the Quarkus blog for more details on the official announcement.

Last week, Red Hat's Quarkus, the Kubernetes-native Java framework for GraalVM and OpenJDK HotSpot, announced the availability of its first release candidate. It also notified users that its first stable version will be released by the end of this month.

Launched in March this year, the Quarkus framework uses Java libraries and standards to provide an effective solution for running Java in new deployment environments like serverless, microservices, containers, Kubernetes, and more. Java developers can employ this framework to build apps with faster startup times and lower memory usage than traditional Java-based microservices frameworks. It also provides flexible and easy-to-use APIs that help developers build cloud-native apps with best-of-breed frameworks.

"The community has worked really hard to up the quality of Quarkus in the last few weeks: bug fixes, documentation improvements, new extensions and above all upping the standards for developer experience," states the Quarkus team.

Latest updates added in Quarkus 1.0:

  • A new reactive core based on Vert.x, with support for both reactive and imperative programming models. This feature aims to make reactive programming a first-class feature of Quarkus.
  • A new non-blocking security layer that allows reactive authentication and authorization. It also enables reactive security operations to integrate with Vert.x.
  • Improved Spring API compatibility, including Spring Web and Spring Data JPA, as well as Spring DI.
  • A Quarkus ecosystem, also called the "universe": a set of extensions that fully supports native compilation via GraalVM native image.
  • Support for Java 8, 11, and 13 when using Quarkus on the JVM, with Java 11 native compilation to follow in the near future.
Red Hat says, "Looking ahead, the community is focused on adding additional extensions like enhanced Spring API compatibility, improved observability, and support for long-running transactions."

Many users are excited about Quarkus and are looking forward to trying the stable version.

https://twitter.com/zemiak/status/1192125163472637952
https://twitter.com/loicrouchon/status/1192206531045085186
https://twitter.com/lasombra_br/status/1192114234349563905

  • How Quarkus brings Java into the modern world of enterprise tech
  • Apple shares tentative goals for WebKit 2020
  • Apple introduces Swift Numerics to support numerical computing in Swift
  • Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more
  • Fastly announces the next-gen edge computing services available in private beta
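For readers who want to try the release candidate, the getting-started flow can be sketched from the command line. This is a minimal sketch: the plugin version (1.0.0.CR1) and the org.acme project coordinates are assumptions for illustration, not taken from the announcement.

```shell
# Scaffold a new Quarkus app (version and project coordinates are assumptions)
mvn io.quarkus:quarkus-maven-plugin:1.0.0.CR1:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=getting-started \
    -DclassName="org.acme.GreetingResource" \
    -Dpath="/hello"

# Start live-coding dev mode; changes are reflected without a restart
cd getting-started && ./mvnw compile quarkus:dev
```

Once dev mode is running, the generated resource would answer on http://localhost:8080/hello.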


Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Melisha Dsouza
12 Dec 2018
3 min read
At KubeCon + CloudNativeCon, happening in Seattle this week, Elastic N.V., the pioneer behind Elasticsearch and the Elastic Stack, announced the alpha availability of Helm Charts for Elasticsearch on Kubernetes. Helm Charts will make it possible to deploy Elasticsearch and Kibana to Kubernetes almost instantly. Developers use Helm charts for their flexibility in creating, publishing, and sharing Kubernetes applications.

The ease of using Kubernetes to manage containerized workloads has also led Elastic users to deploy their Elasticsearch workloads to Kubernetes. Now, with Helm chart support for Elasticsearch on Kubernetes, developers can harness the benefits of both Helm charts and Kubernetes to install, configure, upgrade, and run their applications on Kubernetes.

With this new functionality in place, users can take advantage of best practices and templates to deploy Elasticsearch and Kibana. They will get access to some basic free features like monitoring, Kibana Canvas, and Spaces. According to the blog post, Helm charts will serve as a "way to help enable Elastic users to run the Elastic Stack using modern, cloud-native deployment models and technologies."

Why should developers consider Helm charts?

Helm charts are known for letting users leverage Kubernetes packages with the click of a button or a single CLI command. Kubernetes can be complex to use, impairing developer productivity. Helm charts improve productivity as follows:

  • With Helm charts, developers can focus on developing applications rather than deploying dev-test environments. They can author their own chart, which in turn automates deployment of their dev-test environment.
  • Helm comes with "push button" deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience.
  • To combat the complexity of deploying a Kubernetes-orchestrated container application, Helm Charts let software vendors and developers preconfigure their applications with sensible defaults, enabling users and deployers to change parameters of the application/chart through a consistent interface.
  • Developers can incorporate production-ready packages while building applications in a Kubernetes environment, eliminating deployment errors caused by incorrect configuration-file entries or mangled deployment recipes.
  • Deploying and maintaining Kubernetes applications can be tedious and error-prone. Helm Charts reduce the complexity of maintaining an app catalog in a Kubernetes environment. A central app catalog reduces duplication of charts (when shared within or between organizations) and spreads best practices by encoding them into charts.

To know more about Helm charts, check out the README files for the Elasticsearch and Kibana charts available on GitHub.

In addition to this announcement, Elastic also announced its collaboration with the Cloud Native Computing Foundation (CNCF) to promote and support open cloud-native technologies and companies. This is another step in Elastic's mission to build products in an open and transparent way. You can head over to Elastic's official blog for in-depth coverage of this news. Alternatively, check out MarketWatch for more insights.

  • Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
  • Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
  • How to perform Numeric Metric Aggregations with Elasticsearch
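The "single CLI command" workflow described above can be sketched as follows. The repository URL and chart names follow Elastic's Helm charts GitHub repository; the Helm 2-era --name flag is an assumption based on the announcement's timeframe.

```shell
# Register Elastic's chart repository (hosted at helm.elastic.co)
helm repo add elastic https://helm.elastic.co
helm repo update

# Install the alpha Elasticsearch and Kibana charts (Helm 2 syntax)
helm install --name elasticsearch elastic/elasticsearch
helm install --name kibana elastic/kibana
```

From there, `helm upgrade` and `helm delete` give the same push-button upgrade and deletion experience the post describes.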


New – Amazon RDS on Graviton2 Processors from AWS News Blog

Matthew Emerick
15 Oct 2020
3 min read
I recently wrote a post announcing the availability of the M6g, R6g and C6g families of instances on Amazon Elastic Compute Cloud (EC2). These instances offer a better cost-performance ratio than their x86 counterparts. They are based on AWS-designed Graviton2 processors, utilizing 64-bit Arm Neoverse N1 cores.

Starting today, you can also benefit from better cost-performance for your Amazon Relational Database Service (RDS) databases, compared to the previous M5 and R5 generations of database instance types, with the availability of AWS Graviton2 processors for RDS. You can choose between the M6g and R6g instance families and three database engines (MySQL 8.0.17 and higher, MariaDB 10.4.13 and higher, and PostgreSQL 12.3 and higher). M6g instances are ideal for general-purpose workloads. R6g instances offer 50% more memory than their M6g counterparts and are ideal for memory-intensive workloads, such as big data analytics.

Graviton2 instances provide up to 35% performance improvement and up to 52% price-performance improvement for RDS open source databases, based on internal testing of workloads with varying compute and memory requirements. The Graviton2 instance family includes several new performance optimizations, such as larger L1 and L2 caches per core, higher Amazon Elastic Block Store (EBS) throughput than comparable x86 instances, fully encrypted RAM, and many others, as detailed on this page. You can benefit from these optimizations with minimal effort, by provisioning or migrating your RDS instances today.

RDS instances are available in multiple configurations, starting with 2 vCPUs, with 8 GiB of memory for M6g and 16 GiB for R6g, and up to 10 Gbps of network bandwidth, giving you new entry-level general-purpose and memory-optimized instances.
The table below shows the list of instance sizes available:

Instance Size   vCPU   Memory M6g (GiB)   Memory R6g (GiB)   Dedicated EBS Bandwidth (Mbps)   Network Bandwidth (Gbps)
large           2      8                  16                 Up to 4750                       Up to 10
xlarge          4      16                 32                 Up to 4750                       Up to 10
2xlarge         8      32                 64                 Up to 4750                       Up to 10
4xlarge         16     64                 128                4750                             Up to 10
8xlarge         32     128                256                9000                             12
12xlarge        48     192                384                13500                            20
16xlarge        64     256                512                19000                            25

Let's Start Your First Graviton2 Based Instance

To start a new RDS instance, I use the AWS Management Console or the AWS Command Line Interface (CLI), just like usual, and select one of the db.m6g or db.r6g instance types (this page in the documentation has all the details). Using the CLI, it would be:

aws rds create-db-instance \
    --region us-west-2 \
    --db-instance-identifier $DB_INSTANCE_NAME \
    --db-instance-class db.m6g.large \
    --engine postgres \
    --engine-version 12.3 \
    --allocated-storage 20 \
    --master-username $MASTER_USER \
    --master-user-password $MASTER_PASSWORD

The CLI confirms with:

{
    "DBInstance": {
        "DBInstanceIdentifier": "newsblog",
        "DBInstanceClass": "db.m6g.large",
        "Engine": "postgres",
        "DBInstanceStatus": "creating",
        ...
    }
}

Migrating to Graviton2 instances is easy. In the AWS Management Console, I select my database and click Modify. Then I select the new DB instance class. Or, using the CLI, I can use the modify-db-instance API call. There is a short service interruption when you switch instance types. By default, the modification will happen during your next maintenance window, unless you enable the ApplyImmediately option.

You can provision new or migrate to Graviton2 Amazon Relational Database Service (RDS) instances in all regions where EC2 M6g and R6g are available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Frankfurt).

As usual, let us know your feedback on the AWS Forum or through your usual AWS contact.

-- seb
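The modify-db-instance migration path mentioned above can be sketched like this; the instance identifier reuses the "newsblog" name from the create example, and the target class is one assumption among the sizes in the table.

```shell
# Move an existing RDS instance onto a Graviton2 instance class.
# --apply-immediately skips the maintenance window (expect a short interruption).
aws rds modify-db-instance \
    --db-instance-identifier newsblog \
    --db-instance-class db.m6g.large \
    --apply-immediately
```

Without --apply-immediately, the change is queued for the next maintenance window, matching the default behavior described above.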


Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful

Bhagyashree R
05 Sep 2018
2 min read
Yesterday, Atlassian made two major announcements: the acquisition of OpsGenie and the release of Jira Ops. Both products aim to help IT operations teams resolve downtime quickly and reduce the occurrence of these incidents over time. Atlassian is an Australian enterprise software company that develops collaboration software for teams, with products including Jira, Confluence, HipChat, Bitbucket, and Stash.

OpsGenie: Alert the right people at the right time

OpsGenie is an IT alert and notification management tool that notifies all the right people (operations and software development teams) of critical alerts. It uses a sophisticated combination of scheduling, escalation paths, and notifications that take things like time zones and holidays into account. OpsGenie is a prompt and reliable alerting system, which comes with the following features:

  • It integrates with monitoring, ticketing, and chat tools to notify the team over multiple channels, providing the information necessary for your team to immediately begin resolution.
  • It provides various notification methods, such as email, SMS, push, phone call, and group chat, to ensure alerts are seen by users.
  • You can build and modify schedules and define escalation rules within one interface.
  • It tracks everything related to alerts and incidents, helping you gain insight into areas of success and opportunities for improvement.
  • You can define escalation policies and on-call schedules with rotations to notify the right people and escalate when necessary.

Jira Ops: Resolve incidents faster

Jira Ops is a unified incident command center that provides the response team with a single place for response coordination. It integrates with OpsGenie, Slack, Statuspage, PagerDuty, and xMatters. It guides the response team through the response workflow and automates common steps, such as creating a new Slack room for each incident.
Jira Ops is available through Atlassian's early access program. Jira Ops enables you to resolve downtime quickly by providing the following functionality:

  • It quickly alerts you about what is affected and what the associated impacts are.
  • You can check the status, severity level, and duration of the incident.
  • You can see real-time response activities.
  • You can also find the associated Slack channel, current incident manager, and technical lead.

You can find more details on OpsGenie and Jira Ops on Atlassian's official website.

  • Atlassian sells Hipchat IP to Slack
  • Atlassian open sources Escalator, a Kubernetes autoscaler project
  • Docker isn't going anywhere


Gremlin makes chaos engineering with Docker easier with new container discovery feature

Richard Gall
28 Aug 2018
3 min read
Gremlin, the product that's bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering tests alongside Docker.

Chaos engineering and containers have always been closely related - arguably, the loosely coupled architectural style of modern software driven by containers has, in turn, led to an increased demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today's updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery will do two things: it will make it easier for engineers to identify specific Docker containers, and, more importantly, it will allow them to simulate attacks or errors within those containerized environments. The real benefit is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a 'chaos test' on can ordinarily be very challenging and time-consuming.

Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." This new feature could save the engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly."

Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?
As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way in containerization - its market share growing healthily - making it easier to perform resiliency tests on containers is incredibly important for the product. It's not a stretch to say that Gremlin has probably been working on this feature for some time, with users placing it high on their list of must-haves.

Chaos engineering is still in its infancy - this year's Skill Up report found that it remains on the periphery of many developers' awareness. However, that could quickly change, and it appears that Gremlin is working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.

Grafana Labs announces general availability of Loki 1.0, a multi-tenant log aggregation system

Savia Lobo
20 Nov 2019
3 min read
Today, at the ongoing KubeCon 2019, Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient, and cost-effective approach to log aggregation.

The Loki project was first introduced at KubeCon Seattle in 2018. Before the official launch, the project was started inside Grafana Labs and used internally to monitor all of Grafana Labs' infrastructure, ingesting around 1.5 TB/10 billion log lines a day. Released under the Apache 2.0 license, Loki is optimized for Grafana, Kubernetes, and Prometheus. Within just a year, the project has received more than 1,000 contributions from 137 contributors and has nearly 8,000 stars on GitHub.

With Loki 1.0, users can instantly switch between metrics and logs, preserving context and reducing MTTR. By storing compressed, unstructured logs and indexing only metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing.

Loki's design is inspired by Prometheus, the open source monitoring solution for the cloud-native ecosystem, and it offers a Prometheus-like query language called LogQL for further integration with the cloud-native ecosystem.
Tom Wilkie, VP of Product at Grafana Labs, said, "Grafana Labs is proud to have created Loki and fostered the development of the project, building first-class support for Loki into Grafana and ensuring customers receive the support and features they need." He further added, "We are committed to delivering an open and composable observability platform, of which Loki is a key component, and continue to rely on the power of open source and our community to enhance observability into application and infrastructure."

Grafana Labs also offers enterprise services and support for Loki, which include:

  • Support and training from Loki maintainers and experts
  • 24 x 7 x 365 coverage from the geographically distributed Grafana team
  • Per-node pricing that scales with deployment

Read more about Grafana Loki in detail on GitHub.

  • "Don't break your users and create a community culture", says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
  • KubeCon + CloudNativeCon EU 2019 highlights: Microsoft's Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!
  • Grafana 6.2 released with improved security, enhanced provisioning, Bar Gauge panel, lazy loading and more
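To give a flavor of the Prometheus-like LogQL mentioned above, here is a hedged sketch using Loki's logcli client; the Loki address and the label selector are assumptions for illustration.

```shell
# Point logcli at a Loki instance (address is an assumption)
export LOKI_ADDR=http://localhost:3100

# A Prometheus-style label selector plus a line filter:
# select the nginx stream, keep only lines containing "error"
logcli query '{app="nginx"} |= "error"'
```

The `{label="value"}` selector works just like a Prometheus metric selector, which is what lets Grafana users switch between metrics and logs without changing mental models.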


New – Redis 6 Compatibility for Amazon ElastiCache from AWS News Blog

Matthew Emerick
07 Oct 2020
5 min read
Since the launch of Redis 5.0 compatibility for Amazon ElastiCache, there have been lots of improvements to Amazon ElastiCache for Redis, including upstream support such as version 5.0.6. Earlier this year, we announced Global Datastore for Redis, which lets you replicate a cluster in one region to clusters in up to two other regions. Recently, we improved your ability to monitor your Redis fleet by enabling 18 additional engine and node-level CloudWatch metrics. We also added support for resource-level permission policies, allowing you to assign AWS Identity and Access Management (IAM) principal permissions to specific ElastiCache resources.

Today, I am happy to announce Redis 6 compatibility for Amazon ElastiCache for Redis. This release brings several new and important features:

  • Managed Role-Based Access Control - Amazon ElastiCache for Redis 6 now provides you with the ability to create and manage users and user groups that can be used to set up Role-Based Access Control (RBAC) for Redis commands. You can now simplify your architecture while maintaining security boundaries by having several applications use the same Redis cluster without being able to access each other's data. You can also take advantage of granular access control and authorization to create administration and read-only user groups. Amazon ElastiCache enhances the Access Control Lists (ACL) introduced in open source Redis 6 to provide a managed RBAC experience, making it easy to set up access control across several Amazon ElastiCache for Redis clusters.
  • Client-Side Caching - Amazon ElastiCache for Redis 6 comes with server-side enhancements to deliver efficient client-side caching and further improve your application performance. Redis clusters now support client-side caching by tracking client requests and sending invalidation messages for data stored on the client.
In addition, you can take advantage of a broadcast mode that allows clients to subscribe to a set of notifications from Redis clusters.

  • Significant Operational Improvements - This release also includes several enhancements that improve application availability and reliability. Specifically, Amazon ElastiCache has improved replication under low-memory conditions, especially for workloads with medium/large-sized keys, by reducing latency and the time it takes to perform snapshots. Open source Redis enhancements include improvements to the expiry algorithm for faster eviction of expired keys and various bug fixes.

Note that open source Redis 6 also announced support for encryption in transit, a capability that has already been available in Amazon ElastiCache for Redis since version 4.0.10. This release of Amazon ElastiCache for Redis 6 does not impact Amazon ElastiCache for Redis' existing support for encryption in transit.

In order to apply RBAC to a new or existing Redis 6 cluster, we first need to ensure you have a user and user group created. We'll review the process below.

Using Role-Based Access Control - How it works

As an alternative to authenticating users with the Redis AUTH command, Amazon ElastiCache for Redis 6 offers Role-Based Access Control (RBAC). With RBAC, you create users and assign them specific permissions via an access string. To create, modify, and delete users and user groups, navigate to the User Management and User Group Management sections in the ElastiCache console.

ElastiCache automatically configures a default user with the user ID and user name "default", and you can then add it, or newly created users, to new groups in User Group Management. If you want to replace the default user with your own password and access settings, you need to create a new user with the user name set to "default" and then swap it with the original default user. We recommend using your own strong password for the default user.
The following example shows how to swap the original default user for another default user with a modified access string, via the AWS CLI:

$ aws elasticache create-user \
    --user-id "new-default-user" \
    --user-name "default" \
    --engine "REDIS" \
    --passwords "a-str0ng-pa))word" \
    --access-string "off +get ~keys*"

Create a user group and add the original default user:

$ aws elasticache create-user-group \
    --user-group-id "new-default-group" \
    --engine "REDIS" \
    --user-ids "default"

Swap the new default user with the original default user:

$ aws elasticache modify-user-group \
    --user-group-id "new-default-group" \
    --user-ids-to-add "new-default-user" \
    --user-ids-to-remove "default"

You can also modify a user's password or change its access permissions using the modify-user command, or remove a specific user using the delete-user command; a deleted user is removed from any user groups to which it belongs. Similarly, you can modify a user group by adding new users and/or removing current ones using the modify-user-group command, or delete a user group using the delete-user-group command. Note that only the user group itself, not the users belonging to it, is deleted.

Once you have created a user group and added users, you can assign the user group to a replication group, or migrate between Redis AUTH and RBAC. For more information, see the documentation.

Redis 6 cluster for ElastiCache - Getting Started

As usual, you can use the ElastiCache console, CLI, APIs, or a CloudFormation template to create a new Redis 6 cluster. I'll use the console: choose Redis from the navigation pane and click Create with the following settings. Select the "Encryption in-transit" checkbox to ensure you can see the "Access Control" options. You can then choose either User Group Access Control List (the RBAC feature) or Redis AUTH default user as your access control option. If you select RBAC, you can choose one of the available user groups.

My cluster is up and running within minutes.
You can also use the in-place upgrade feature on an existing cluster: select the cluster, then click Actions and Modify. You can change the engine version from the 5.0.6-compatible engine to 6.x.

Now Available

Amazon ElastiCache for Redis 6 is now available in all AWS regions. For a list of supported ElastiCache for Redis versions, refer to the documentation. Please send us feedback either in the AWS forum for Amazon ElastiCache or through AWS support or your account team.

– Channy;
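The console upgrade steps above also have a CLI path; a minimal sketch using modify-replication-group, where the replication group ID is an assumption for illustration.

```shell
# In-place upgrade of an existing cluster from the 5.0.6-compatible engine to 6.x.
# --apply-immediately performs the change now instead of at the maintenance window.
aws elasticache modify-replication-group \
    --replication-group-id my-redis-cluster \
    --engine-version 6.x \
    --apply-immediately
```

As with any engine-version change, plan for a brief interruption while nodes are upgraded.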


Liz Fong-Jones reveals she is leaving Google in February

Richard Gall
03 Jan 2019
2 min read
Liz Fong-Jones has been a key figure in the politicization of Silicon Valley over the last 18 months. But the Developer Advocate at Google Cloud Platform revealed today (3rd January 2019) that she is to leave the company in February, citing Google's lack of leadership in response to the demands made by employees during the Google walkout in November 2018.

Fong-Jones hinted that she had found another role before Christmas, writing on Twitter that she had found a new job:

https://twitter.com/lizthegrey/status/1075837650433646593

That was confirmed today when Fong-Jones tweeted "Resignation letter is in. February 25 is my last day." Her new role hasn't yet been revealed, but it appears that she will remain within SRE. She told one follower that she will likely be at SRECon in Dublin later in the year.

https://twitter.com/lizthegrey/status/1080837397347221505

She made it clear that she had no issue with her team, stating that her decision to leave was instead "a reflection on what Google's become over the 11 years I've worked there."

Why Liz Fong-Jones' exit from Google is important

Fong-Jones' exit from Google doesn't reflect well on the company. If anything, it only serves to highlight the company's stubbornness. Despite having months to respond to serious allegations of sexual harassment and systemic discrimination, there appears to be a refusal to acknowledge problems, let alone find a way forward to tackle them.

From Fong-Jones' perspective, the move is probably as much pragmatic as it is symbolic. She spoke on Twitter of "burnout" from "doing what has to be done, as second shift work."

https://twitter.com/lizthegrey/status/1080848586135560192

While there are clearly personal reasons for Fong-Jones to leave Google, because of her importance as a figure in conversations around tech worker rights and diversity, her exit will have significant symbolic power.
It's likely that she'll continue to play an important part in helping tech workers - in Silicon Valley and elsewhere - organize for a better future, even as she aims to do "more of what you want to do".


Kubernetes 1.11 is here!

Vijin Boricha
28 Jun 2018
3 min read
This is the second release of Kubernetes in 2018. Kubernetes 1.11 comes with significant updates to features that revolve around the maturity, scalability, and flexibility of Kubernetes. This newest version comes with storage and networking enhancements that make it possible to plug any kind of infrastructure, cloud or on-premise, into the Kubernetes system. Now let's dive into the key aspects of this release:

IPVS-Based In-Cluster Service Load Balancing Promotes to General Availability

IPVS provides a simpler programming interface than iptables and delivers high-performance in-kernel load balancing. In this release it has moved to general availability, where it provides higher network throughput, lower programming latency, and higher scalability limits. It is not yet the default option, but clusters can use it for production traffic.

CoreDNS Graduates to General Availability

CoreDNS has moved to general availability and is now the default option when using kubeadm. It is a flexible DNS server that directly integrates with the Kubernetes API. In comparison to the previous DNS server, CoreDNS has fewer moving parts, as it is a single process that creates custom DNS entries to support flexible use cases. CoreDNS is also memory-safe, as it is written in Go.

Dynamic Kubelet Configuration Moves to Beta

It has always been difficult to update Kubelet configurations in a running cluster, as Kubelets are configured through command-line flags. With this feature moving to Beta, one can configure Kubelets in a live cluster through the API server.

CSI enhancements

Over the past few releases, CSI (Container Storage Interface) has been a major focus area. This service was moved to Beta in version 1.10. In this version, the Kubernetes team continues to enhance CSI with a number of new features, such as:

- Alpha support for raw block volumes in CSI
- Integration of CSI with the new kubelet plugin registration mechanism
- Easier passing of secrets to CSI plugins

Enhanced Storage Features

This release introduces online resizing of Persistent Volumes as an alpha feature. With this feature, users can increase the size of a PV without terminating pods or unmounting the volume. A user can update the PVC to request a new size, and the kubelet can resize the file system for the PVC.

Dynamic maximum volume count is introduced as an alpha feature. With this new feature, in-tree volume plugins can specify the number of volumes that can be attached to a node, allowing the limit to vary based on the node type. In earlier versions, the limits were configured through an environment variable.

The StorageObjectInUseProtection feature is now stable and prevents issues from deleting a Persistent Volume or a Persistent Volume Claim that is bound to an active pod.

You can learn more about Kubernetes 1.11 from the Kubernetes blog, and this version is available for download on GitHub. To get started with Kubernetes, check out our following books:

- Learning Kubernetes [Video]
- Kubernetes Cookbook - Second Edition
- Mastering Kubernetes - Second Edition

Related Links

- VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
- Rackspace now supports Kubernetes-as-a-Service
- Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
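As a minimal sketch of the online volume resizing flow described above: this is an alpha feature, so it assumes the ExpandPersistentVolumes feature gate is enabled, and the storage class name, provisioner, claim name, and sizes below are purely illustrative.

```yaml
# Hypothetical StorageClass that permits expansion; the provisioner is illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
---
# A claim using it; editing spec.resources.requests.storage to a larger
# value (e.g. 20Gi) later triggers the resize without unmounting the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: resizable
  resources:
    requests:
      storage: 10Gi
```

The key design point is that the resize is requested declaratively on the claim; the kubelet then grows the file system for the PVC while the pod keeps running.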
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace

Melisha Dsouza
11 Sep 2018
4 min read
Microsoft is rebranding Visual Studio Team Services (VSTS) to Azure DevOps, along with Azure DevOps Server, the successor of Team Foundation Server (TFS). Microsoft understands that DevOps has become increasingly critical to a team's success. The rebranding is aimed at helping teams ship higher-quality software in less time. Azure DevOps supports both public and private cloud configurations. The services are open and extensible, and designed to work with any type of application, framework, platform, or cloud. Since the Azure DevOps services work great together, users can gain more control over their projects.

Azure DevOps is free for open source projects and small projects of up to five users. For larger teams, the cost ranges from $30 per month to $6,150 per month, depending upon the number of users. VSTS users will be upgraded to Azure DevOps projects automatically without any loss of functionality. URLs will change from abc.visualstudio.com to dev.azure.com/abc, and redirects from visualstudio.com URLs will be supported to avoid broken links. New users will get the update starting 10th September 2018, and existing users can expect the update in the coming months.

Key features in Azure DevOps:

#1 Azure Boards

Users can keep track of their work at every development stage with Kanban boards, backlogs, team dashboards, and custom reporting. Built-in scrum boards and planning tools help in planning meetings, while powerful analytics tools provide new insights into the health and status of projects.

#2 Azure Artifacts

Users can easily manage Maven, npm, and NuGet package feeds from public and private sources. Storing and sharing code across small teams and large enterprises is now efficient thanks to Azure Artifacts. Users can share packages and use built-in CI/CD, versioning, and testing. They can easily access all their artifacts in builds and releases.

#3 Azure Repos

Users get unlimited cloud-hosted private Git repos for their projects. They can securely connect with and push code into their Git repos from any IDE, editor, or Git client. Code-aware searches help them find what they are looking for. They can perform effective Git code reviews and use forks to promote collaboration with inner source workflows. Azure Repos helps users maintain high code quality by requiring code reviewer sign-off, successful builds, and passing tests before pull requests can be merged.

#4 Azure Test Plans

Users can improve their code quality using planned and exploratory testing services for their apps. These test plans help users capture rich scenario data, test their applications, and take advantage of end-to-end traceability.

#5 Azure Pipelines

There's more in store for VSTS users. For a seamless developer experience, Azure Pipelines is also now available in the GitHub Marketplace. Users can easily configure a CI/CD pipeline for any Azure application using their preferred language and framework. These pipelines can be built and deployed with ease. They provide users with status reports, annotated code, and detailed information on changes to the repo within the GitHub interface. The pipelines work with any platform, including Azure, Amazon Web Services, and Google Cloud Platform, and can build and deploy apps across operating systems, including Android, iOS, Linux, macOS, and Windows. The pipelines are free for open source projects.

Microsoft has tried to improve the user experience by introducing these upgrades. Are you excited yet? You can learn more at the Microsoft live Azure DevOps keynote today at 8:00 a.m. Pacific, and at a workshop with Q&A on September 17 at 8:30 a.m. Pacific on Microsoft's events page. You can read all the details of the announcement on Microsoft's official blog.
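To illustrate the kind of pipeline configuration described above, a minimal azure-pipelines.yml might look like the following sketch; the branch name, VM image, and script contents are assumptions for illustration, not part of the announcement.

```yaml
# Hypothetical minimal Azure Pipelines definition
trigger:
- master                    # build on pushes to master

pool:
  vmImage: 'ubuntu-16.04'   # hosted Linux agent; image name is illustrative

steps:
- script: |
    echo "Restore, build, and test the project here"
  displayName: 'Build and test'
```

Checked into the root of a GitHub repository, a file like this is what drives the status reports and per-commit build information surfaced in the GitHub interface.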
- Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
- Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
- 8 ways Artificial Intelligence can improve DevOps
Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

Savia Lobo
05 Nov 2018
2 min read
Microsoft developer David Fowler revealed 'ProcDump for Linux', a Linux version of the ProcDump Sysinternals tool, over the weekend on November 3. ProcDump is a Linux reimagining of the classic ProcDump tool from the Sysinternals suite of tools for Windows. It provides a convenient way for Linux developers to create core dumps of their applications based on performance triggers.

Requirements for ProcDump

The tool currently supports Red Hat Enterprise Linux / CentOS 7, Fedora 26, Mageia 6, and Ubuntu 14.04 LTS, with other versions being tested. It also requires gdb >= 7.6.1 and zlib (build-time only).

Limitations of ProcDump

- Runs only on Linux kernels version 3.5+
- Does not have full feature parity with the Windows version of ProcDump; specifically, it lacks the stay-alive functionality and custom performance counters

Installing ProcDump

ProcDump can be installed using two methods: via a package manager, which is the preferred method, or via a .deb package. To know more about ProcDump in detail, visit its GitHub page.

- Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
- 'We are not going to withdraw from the future' says Microsoft's Brad Smith on the ongoing JEDI bid, Amazon concurs
- Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers
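As a sketch of the performance-triggered dumps described above, typical invocations might look like the following; this assumes procdump is already installed and that a target process with PID 1234 exists, and the thresholds are illustrative.

```shell
# Dump when the target's CPU usage exceeds 65%, writing up to 3 dumps
sudo procdump -C 65 -n 3 -p 1234

# Dump when the target's memory usage exceeds 100 MB
sudo procdump -M 100 -p 1234
```

The resulting core dumps can then be opened with gdb, which is why gdb appears in the tool's requirements.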
Three ways serverless APIs can accelerate enterprise innovation from Microsoft Azure Blog > Announcements

Matthew Emerick
07 Oct 2020
5 min read
With the wrong architecture, APIs can be a bottleneck not only for your applications but for your entire business. Bottlenecks such as downtime, low performance, or high application complexity can result in exaggerated infrastructure and organizational costs and lost revenue. Serverless APIs mitigate these bottlenecks with autoscaling capabilities and consumption-based pricing models.

Once you start thinking of serverless not only as a remover of bottlenecks but also as an enabler of business, layers of your application infrastructure become a source of new opportunities. This is especially true of the API layer, as APIs can be productized to scale your business, attract new customers, or offer new services to existing customers, in addition to their traditional role as the communicator between software services. Given the increasing dominance of APIs and API-first architectures, companies and developers are gravitating towards serverless platforms to host APIs and API-first applications to realize these benefits.

One serverless compute option to host APIs is Azure Functions: event-triggered code that scales on demand, where you only pay for what you use. Gartner predicts that 50 percent of global enterprises will have deployed a serverless functions platform by 2025, up from only 20 percent today. You can publish Azure Functions through API Management to secure, transform, maintain, and monitor your serverless APIs.

Faster time to market

Modernizing your application stack to run microservices on a serverless platform decreases internal complexity and reduces the time it takes to develop new features or products. Each serverless function implements a microservice. By adding many functions to a single API Management product, you can build those microservices into an integrated distributed application. Once the application is built, you can use API Management policies to implement caching or ensure security requirements.
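For instance, a caching policy of the kind mentioned above is expressed in API Management's policy XML; the following is a minimal sketch, and the cache duration and vary-by settings are illustrative choices rather than recommendations.

```xml
<!-- Hypothetical API Management policy: serve cached responses for 60 seconds -->
<policies>
  <inbound>
    <base />
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <cache-store duration="60" />
  </outbound>
</policies>
```

Applying a policy like this at the gateway means the serverless backend is only invoked on cache misses, which reduces both latency and consumption-based compute cost.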
Quest Software uses Azure App Service to host microservices in Azure Functions. These support user capabilities such as registering new tenants, and application functionality like communicating with other microservices or with other Azure platform resources such as the Azure Cosmos DB managed NoSQL database service.

"We're taking advantage of technology built by Microsoft and released within Azure in order to go to market faster than we could on our own. On average, over the last three years of consuming Azure services, we've been able to get new capabilities to market 66 percent faster than we could in the past." - Michael Tweddle, President and General Manager of Platform Management, Quest

Quest also uses Azure API Management as a serverless API gateway for the Quest On Demand microservices that implement business logic with Azure Functions, and to apply policies that control access, traffic, and security across microservices.

Modernize your infrastructure

Developers should be focusing on developing applications, not provisioning and managing infrastructure. API Management provides a serverless API gateway that delivers a centralized, fully managed entry point for serverless backend services. It enables developers to publish, manage, secure, and analyze APIs at global scale. Using serverless functions and API gateways together allows organizations to better optimize resources and stay focused on innovation.

For example, a serverless function provides an API through which restaurants can adjust their local menus if they run out of an item. Chipotle turned to Azure to create a unified web experience from scratch, leveraging both Azure API Management and Azure Functions for critical parts of their infrastructure. Calls to back-end services (such as ordering, delivery, and account management and preferences) hit Azure API Management, which gives Chipotle a single, easily managed endpoint and API gateway into its various back-end services and systems.
With such functionality, other development teams at Chipotle are able to work on modernizing the back-end services behind the gateway in a way that remains transparent to Smith's front-end app.

"API Management is great for ensuring consistency with our API interactions, enabling us to always know what exists where, behind a single URL," says Smith. "There are lots of changes going on behind the API gateway, but we don't need to worry about them." - Mike Smith, Lead Software Developer, Chipotle

Innovate with APIs

Serverless APIs are used to increase revenue, decrease cost, or improve business agility. As a result, technology becomes a key driver of business growth. Businesses can leverage artificial intelligence to analyze API calls to recognize patterns and predict future purchase behavior, thus optimizing the entire sales cycle.

PwC AI turned to Azure Functions to create a scalable API for its regulatory obligation knowledge mining solution. It also uses Azure Cognitive Search to quickly surface predictions found by the solution, embedding years of experience into an AI model that easily identifies regulatory obligations within text.

"As we're about to launch our ROI POC, I can see that Azure Functions is a value-add that saves us two to four weeks of work. It takes care of handling prediction requests for me. I also use it to extend the model to other PwC teams and clients. That's how we can productionize our work with relative ease." - Todd Morrill, PwC Machine Learning Scientist-Manager, PwC

Quest Software, Chipotle, and PwC are just a few Microsoft Azure customers leveraging tools such as Azure Functions and Azure API Management to create an API architecture that ensures their APIs are monitored, managed, and secure. Rethinking your API approach to use serverless technologies will unlock new capabilities within your organization that are not limited by scale, cost, or operational resources.
Get started immediately

Learn about common serverless API architecture patterns at the Azure Architecture Center, which provides high-level overviews and reference architectures for common patterns that leverage Azure Functions and Azure API Management, in addition to other Azure services: for example, a reference architecture for a web application with a serverless API.
Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Savia Lobo
02 May 2019
4 min read
Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year's DockerCon update showed multiple .NET demos of how Docker can be used both for modern applications and for older applications built on traditional architectures. This made it easier for users to containerize .NET applications using tools from both Microsoft and Docker.

The team said that "most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0." "This is the first release in which we've made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak," Microsoft writes in one of their blog posts. The team also mentions that they are invested in making .NET Core a true container runtime. They look forward to hardening .NET's runtime to make it container-aware and able to function efficiently in low-memory environments.

Let's have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Less memory allocation and fewer GC heaps by default

With .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing in tens of percentage points of improvement.

The team also mentions a new policy for determining how many GC heaps to create. This matters most on machines where a low memory limit is set but no CPU limit is set, on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both these changes result in lower memory usage by default and make the default .NET Core configuration better in more cases.
PowerShell added to .NET Core SDK container images

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. Having PowerShell inside the .NET Core SDK container image enables two main scenarios that were not otherwise possible:

- Writing .NET Core application Dockerfiles with PowerShell syntax, for any OS
- Writing .NET Core application/library build logic that can be easily containerized

Note: PowerShell Core is now available as part of the .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK itself.

.NET Core images now available via Microsoft Container Registry

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change:

- Syndicating Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat
- Using Microsoft Azure as a global CDN for delivering Microsoft-provided container images

Platform matrix and support

With .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows:

- Alpine: support tip and retain support for one quarter (3 months) after a new version is released. Currently, 3.9 is tip, and the team will stop producing 3.8 images in a month or two.
- Debian: support one Debian version per latest .NET Core version. This is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0, the team may publish Debian 10 based images. However, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions.
- Ubuntu: support one Ubuntu version per latest .NET Core version (18.04). Also, as new Ubuntu LTS versions approach, the team will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions.

For Windows, the team supports the cross-product of Nano Server and .NET Core versions.
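A quick way to see the PowerShell addition described above in action is to start a container from the SDK image and invoke pwsh; this is a sketch, and the image tag shown is illustrative and may differ from the final published tags.

```shell
# Print the PowerShell Core version shipped inside the .NET Core 3.0 SDK image
docker run --rm mcr.microsoft.com/dotnet/core/sdk:3.0 pwsh -c '$PSVersionTable.PSVersion'
```

Note that the image is pulled from MCR, reflecting the registry change described above, while discovery still happens through Docker Hub.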
ARM architecture

The team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments. Apart from these advantages, the team has also added support for Docker memory and CPU limits. To know more about this partnership in detail, read Microsoft's official blog post.

- DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
- Are Debian and Docker slowly losing popularity?
- Creating a Continuous Integration commit pipeline using Docker [Tutorial]
Red Hat open sources Project Quay container registry

Savia Lobo
13 Nov 2019
2 min read
Yesterday, Red Hat introduced the open source Project Quay container registry, which is the upstream project representing the code that powers Red Hat Quay and Quay.io. Open sourced as a Red Hat commitment, Project Quay "represents the culmination of years of work around the Quay container registry since 2013 by CoreOS, and now Red Hat," the official post reads.

The Red Hat Quay container image registry provides storage and enables users to build, distribute, and deploy containers. It also helps users gain more security over their image repositories with automation, authentication, and authorization systems. It is compatible with most container environments and orchestration platforms and is available as a hosted service or on-premises.

Launched in 2013, Quay grew in popularity due to its focus on developer experience and highly responsive support, and added capabilities such as image rollback and zero-downtime garbage collection. Quay was acquired by CoreOS in 2014 with a mission to secure the internet through automated operations. Shortly after the acquisition, the company released the on-premise offering of Quay, which is presently known as Red Hat Quay.

The Quay team also created the Clair open source container security scanning project, which it has integrated with Quay since 2015 and which is directly built into Project Quay. Clair enables the container security scanning feature in Red Hat Quay, helping users identify known vulnerabilities in their container registries.

Open sourced as part of Project Quay, both the Quay and Clair code bases will help cloud-native communities lower the barrier to innovation around containers, making containers more secure and accessible. Project Quay contains a collection of open source software licensed under Apache 2.0 and other open source licenses, and it follows an open source governance model with a maintainer committee.
With an open community, Red Hat Quay and Quay.io users can benefit from being able to work together on the upstream code. Project Quay will be officially launched at the OpenShift Commons Gathering on November 18 in San Diego at KubeCon 2019. To know more about this announcement, you can read Red Hat's official blog post.

- Red Hat announces CentOS Stream, a "developer-forward distribution", jointly with the CentOS Project
- Expanding Web Assembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership
- After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption