
Tech News

OpenSSH 7.8 released!

Melisha Dsouza
27 Aug 2018
4 min read
OpenSSH 7.8 base source code was released on August 24, 2018. It includes a fix for the username enumeration vulnerability, a change to the default format for private key files, and many more updates. Additionally, support for running ssh setuid root has been removed, and a couple of new signature algorithms have been added. The base source code is designed specifically for OpenBSD; the aim is to keep the code simple, clean, minimal, and auditable. The release will be available from the mirrors listed at http://www.openssh.com/ shortly. Let's take a look at what developers can expect in this new version of OpenSSH.

Changes that may affect existing configurations

- ssh-keygen(1): Writes OpenSSH-format private keys by default instead of using OpenSSL's PEM format. This offers better protection against offline password guessing and supports key comments in private keys.
- sshd(8): Internal support for S/Key multiple-factor authentication is removed. S/Key may still be used via PAM or BSD auth.
- ssh(1): Vestigial support for running ssh(1) setuid is removed.
- sshd(8): The semantics of PubkeyAcceptedKeyTypes and HostbasedAcceptedKeyTypes now specify signature algorithms that are accepted for their respective authentication mechanisms. This matters when using the RSA/SHA2 signature algorithms "rsa-sha2-256", "rsa-sha2-512" and their certificate counterparts. Configurations that override these options but do not use these algorithm names may cause unexpected authentication failures.
- sshd(8): The precedence of session environment variables has changed. ~/.ssh/environment and environment="..." options in authorized_keys files can no longer override SSH_* variables set implicitly by sshd.
- ssh(1)/sshd(8): The default IPQoS used by ssh/sshd has changed. Interactive traffic now uses DSCP AF21, and CS1 is used for bulk traffic. For a detailed understanding, head over to the commit message: https://cvsweb.openbsd.org/src/usr.bin/ssh/readconf.c#rev1.28

What's new in OpenSSH 7.8

This bugfix release also has a couple of new features in store for developers. Some of the important ones:

- New signature algorithms "rsa-sha2-256-cert-v01@openssh.com" and "rsa-sha2-512-cert-v01@openssh.com" explicitly force the use of RSA/SHA2 signatures in authentication. Read more at ssh(1)/sshd(8).
- Countermeasures have been added against timing attacks used for account validation/enumeration. sshd will impose a minimum time for each failed authentication attempt, consisting of a global 5ms minimum plus an additional per-user 0-4ms delay derived from a host secret. Find more information at sshd(8).
- In sshd(8), an administrator can use a SetEnv directive to explicitly specify environment variables in sshd_config. Variables set by SetEnv override the default and client-specified environment.
- In ssh(1), a SetEnv directive can request that the server set an environment variable for the session. Similar to the existing SendEnv option, these variables are set subject to server configuration.
- "SendEnv -PATTERN" clears environment variables previously marked for sending to the server (a minimal configuration sketch follows at the end of this article).

Bug fixes introduced in this new version

- sshd(8): Avoids observable differences in request parsing that could be used to determine whether a target user is valid.
- sshd(8): Fixes failures to read authorized_keys caused by faulty supplemental group caching.
- Checking of authorized_keys environment="..." options has been relaxed to allow underscores in variable names (a regression introduced in 7.7).
- Some memory leaks in ssh(1)/sshd(8) have been fixed.
- SSH2_MSG_DEBUG messages sent to Twisted Conch clients by ssh(1)/sshd(8) have been disabled.
- Tunnel forwarding has been fixed.
- ssh(1): Fixes a pwent clobber (introduced in OpenSSH 7.7) that could occur during key loading, manifesting as a crash on some platforms.

To get a detailed overview of the features and changes introduced in portability and checksums in this new release, head over to the official release notes.
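To make the new SetEnv and SendEnv behaviour above a little more concrete, here is a minimal configuration sketch. The directive names come from the release notes; the host, variable names, and values are hypothetical, so treat this as an illustration rather than a recommended setup.

    # /etc/ssh/sshd_config (server side, OpenSSH 7.8+)
    # Variables set here override defaults and anything the client requests.
    SetEnv DEPLOY_ENV=staging

    # ~/.ssh/config (client side, OpenSSH 7.8+)
    Host example-host
        # Ask the server to set a variable for the session (subject to server policy).
        SetEnv BUILD_ID=demo
        # Clear variables previously marked for sending by an earlier SendEnv pattern.
        SendEnv -LC_*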

Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Vincy Davis
19 Jun 2019
4 min read
Update: Five days after the announcement about dropping the i386 architecture, Steve Langasek has changed his stance. Yesterday, 23rd June, Langasek apologised to users and said that this is not the case. He now says that Ubuntu is only dropping updates to the i386 libraries, which will be frozen at the 18.04 LTS versions. He also mentioned that they are planning to support i386 applications, including games, on versions of Ubuntu later than 19.10.

This update comes after Valve Linux developer Pierre-Loup Griffais tweeted on 21st June that Steam will not support Ubuntu 19.10 and its future releases, and recommended that its users avoid them as well. Griffais stated that Valve is planning to switch to a different distribution and is evaluating ways to minimize breakage for its users. https://twitter.com/Plagman2/status/1142262103106973698

Amid all the uncertainty around i386, Wine developers have also raised concerns, because many 64-bit Windows applications still use a 32-bit installer or some 32-bit components. Rosanne DiMesio, one of the admins of Wine's Applications Database (AppDB) and Wiki, said on a mailing list that there are many possibilities, such as building pure 64-bit Wine packages for Ubuntu.

Yesterday the Ubuntu engineering team announced its decision to discontinue i386 (32-bit) as an architecture from Ubuntu 19.10 onwards. In a post to the Ubuntu Developer Mailing List, Canonical's Steve Langasek explains that "i386 will not be included as an architecture for the 19.10 release, and we will shortly begin the process of disabling it for the eoan series across Ubuntu infrastructure." Langasek also mentions that builds and packages of 32-bit software, libraries, and tools will no longer work on newer versions of Ubuntu. He adds that the Ubuntu team will be working out the details of 32-bit support over the course of the 19.10 development cycle.

The topic of dropping i386 systems has been under discussion in the Ubuntu developer community since last year. One of the mails in the archive mentions that "Less and less non-amd64-compatible i386 hardware is available for consumers to buy today from anything but computer parts recycling centers. The last of these machines were manufactured over a decade ago, and support from an increasing number of upstream projects has ended."

Earlier this year, Langasek stated in one of his mails that running a 32-bit i386 kernel on recent 64-bit Intel chips carries a risk of weaker security than using a 64-bit kernel. Also, usage of i386 has declined broadly across the ecosystem, and hence it is "increasingly going to be a challenge to keep software in the Ubuntu archive buildable for this target", he adds.

Langasek also informed users that automated upgrades to 18.10 are disabled on i386. This has been done so that i386 users stay on the LTS, which will be supported until 2023, rather than being stranded on a non-LTS release that will be supported only until early 2021.

The general reaction to the news has been negative, with users expressing outrage at the discontinuation of the i386 architecture. A user on Reddit says, "Dropping support for 32-bit hosts is understandable. Dropping support for 32 bit packages is not. Why go out of your way to screw over your users?" Another user comments, "I really truly don't get it. I've been using ubuntu at least since 5.04 and I'm flabbergasted how dumb and out of sense of reality they have acted since the beginning, considering how big of a headstart they had compared to everyone else. Whether it's mir, gnome3, unity, wayland and whatever else that escapes my memory this given moment, they've shot themselves in the foot over and over again."

On Hacker News, a user commented, "I have a 64-bit machine but I'm running 32-bit Debian because there's no good upgrade path, and I really don't want to reinstall because that would take months to get everything set up again. I'm running Debian not Ubuntu, but the absolute minimum they owe their users is an automatic upgrade path."

A few think this step was needed. Another Redditor adds, "From a developer point of view, I say good riddance. I understand there is plenty of popular 32-bit software still being used in the wild, but each step closer to obsoleting 32-bit is one step in the right direction in my honest opinion."

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Amrata Joshi
03 Jun 2019
3 min read
Last week, Amazon Web Services announced the general availability of Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK makes it easy for developers to build and run applications based on Apache Kafka without having to manage the underlying infrastructure. It is fully compatible with Apache Kafka, which enables customers to easily migrate their on-premises or Amazon Elastic Compute Cloud (Amazon EC2) clusters to Amazon MSK without code changes.

Customers can use Apache Kafka for capturing and analyzing real-time data streams from a range of sources, including database logs, IoT devices, financial systems, and website clickstreams. Many customers choose to self-manage their Apache Kafka clusters and end up spending their time and money on securing, scaling, patching, and ensuring high availability for Apache Kafka and Apache ZooKeeper clusters. Amazon MSK, by contrast, combines the attributes of Apache Kafka with the availability, security, and scalability of AWS.

Customers can now create Apache Kafka clusters designed for high availability that span multiple Availability Zones (AZs) with a few clicks. Amazon MSK also monitors server health and automatically replaces servers when they fail. Customers can easily scale out cluster storage in the AWS management console to meet changes in demand. Amazon MSK runs the Apache ZooKeeper nodes at no additional cost and provides multiple levels of security for Apache Kafka clusters, including VPC network isolation, AWS Identity and Access Management (IAM), and more. It allows customers to continue running applications built on Apache Kafka and to use Apache Kafka compatible tools and frameworks.

General Manager of Amazon MSK, AWS, Rajesh Sheth, wrote to us in an email, "Customers who are running Apache Kafka have told us they want to spend less time managing infrastructure and more time building applications based on real-time streaming data." He further added, "Amazon MSK gives these customers the ability to run Apache Kafka without having to worry about managing the underlying hardware, and it gives them an easy way to integrate their Apache Kafka applications with other AWS services. With Amazon MSK, customers can stand up Apache Kafka clusters in minutes instead of weeks, so they can spend more time focusing on the applications that impact their businesses."

Amazon MSK is currently available in the US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (Paris), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions, and will expand to additional AWS Regions in the next year.

OpenJDK Project Valhalla’s LW2 early access builds are now available for you to test

Bhagyashree R
09 Jul 2019
3 min read
Last week, the early access builds for OpenJDK Project Valhalla's LW2 phase were released; the phase was first proposed in October last year. LW2 is the next iteration of the L-World series, bringing further language and JDK API support for inline types. https://twitter.com/SimmsUpNorth/status/1147087960212422658

Proposed in 2014, Project Valhalla is an experimental OpenJDK project under which the team is working on major new language features and enhancements for Java 10 and beyond. The new features and enhancements fall into the following focus areas:

- Value types
- Generic specialization
- Reified generics
- Improved 'volatile' support

The LW2 specifications

Javac source support

- Starting from LW2, the prototype is based on the mainline JDK (currently version 14), which is why it requires a source level >= JDK 14.
- An inline type is declared using the 'inline class' modifier or the '@__inline__' annotation (see the sketch at the end of this article).
- Interfaces, annotation types, and enums cannot be declared as inline types.
- Top-level, inner, and local classes may be inline types.
- As inline types are implicitly final, they cannot be abstract. Also, all instance fields of an inline class are implicitly final.
- Inline types implicitly extend 'java.lang.Object', similar to enums, annotation types, and interfaces.
- "Indirect" projections of inline types are supported via the "?" operator.
- javac now allows using the '==' and '!=' operators to compare inline types.

Java APIs

- New or modified APIs include 'isInlineClass()', 'asPrimaryType()', 'asIndirectType()', 'isIndirectType()', 'asNullableType()', and 'isNullableType()'.
- The 'getName()' method now reflects the Q or L type signatures for arrays of inline types.
- Using 'newInstance()' on an inline type will throw 'NoSuchMethodException', and 'setAccessible()' will throw 'InaccessibleObjectException'.
- With LW2, initial core Reflection and VarHandles support is in place.

Runtime

- Attempting to synchronize on, or call wait(*) or notify*() on, an inline type will throw 'IllegalMonitorStateException'.
- 'ClassCircularityError' is thrown when loading an instance field of an inline type that declares its own type, either directly or indirectly.
- 'NotSerializableException' will be thrown if you attempt to serialize an inline type.
- Casting from an indirect type to an inline type may result in a 'NullPointerException'.

Download the early access binaries to test this prototype. These were some of the specifications of the LW2 iteration; check out the full list of specifications on OpenJDK's official website. Also, stay tuned to the current happenings in Project Valhalla.
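To make the declaration rules above concrete, here is a minimal sketch of an LW2-style inline class. The class name and fields are hypothetical, the syntax only works on the Valhalla LW2 early-access builds (source level >= JDK 14), not on a standard JDK, and details of the prototype may differ between builds, so treat this purely as an illustration of the notes above.

    // Point is a hypothetical inline type; instance fields are implicitly final.
    public inline class Point {
        int x;
        int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
    }

    // Usage sketch:
    //   Point a = new Point(1, 2);
    //   Point b = new Point(1, 2);
    //   boolean same = (a == b);   // LW2's javac accepts == / != on inline types
    //   Point? maybe = a;          // "indirect" (nullable) projection via the ? operator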

Facebook introduces two new AI-powered video calling devices “built with Privacy + Security in mind”

Sugandha Lahoti
09 Oct 2018
4 min read
Yesterday, Facebook launched two brand new video communication devices. Named Portal and Portal+, these devices let you video call anyone with richer, hands-free experiences. The Portal features a 10-inch 1280 x 800 display, while Portal+ features a 15-inch 1920 x 1080 display. Both devices are powered by artificial intelligence, in the form of Smart Camera and Smart Sound technology. Smart Camera stays with the action and automatically pans and zooms to keep everyone in view. Smart Sound minimizes background noise and enhances the voice of whoever is talking, no matter where they move.

Portal can also be used to call Facebook friends and connections on Messenger even if they don't have Portal, and it supports group calls of up to seven people at the same time. Portal also offers hands-free voice control with Amazon Alexa built in, which can be used to track sports scores, check the weather, control smart home devices, order groceries, and more. Facebook has also enabled shared activities on its Portal devices by partnering with Spotify Premium, Pandora, iHeartRadio, Facebook Watch, Food Network, and Newsy.

Keeping in mind its security breach that affected 50 million users two weeks ago, Facebook says it has paid a lot of attention to privacy and security features. Per their website, "We designed Portal with tools that give you control: You can completely disable the camera and microphone with a single tap. Portal and Portal+ also come with a camera cover, so you can easily block your camera's lens at any time and still receive incoming calls and notifications, plus use voice commands. To manage Portal access within your home, you can set a four- to 12-digit passcode to keep the screen locked. Changing the passcode requires your Facebook password. We also want to be upfront about what information Portal collects, help people understand how Facebook will use that information and explain the steps we take to keep it private and secure: Facebook doesn't listen to, view, or keep the contents of your Portal video calls. In addition, video calls on Portal are encrypted. For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal's camera doesn't use facial recognition and doesn't identify who you are. Like other voice-enabled devices, Portal only sends voice commands to Facebook servers after you say, 'Hey Portal.' You can delete your Portal's voice history in your Facebook Activity Log at any time."

In all of the above, Facebook seems quite cryptic about audio data, and it doesn't really explain how it will use the information it collects from users. Voice data is stored on Facebook servers by default, probably to improve Portal's understanding of the user's language quirks and needs. But it does make one wonder: should this be opt-in rather than opt-out by default? Another jarring aspect is the need for one's Facebook password to change the device's passcode. It feels as though the new devices are yet another way for Facebook to add users to Facebook, not to mention the fact that Facebook just had a data breach on its site, the repercussions of which it is still investigating.

In an interesting poll conducted on Twitter by Dr. Jen Golbeck, Professor at UMD, over 63% of respondents said that they would not trust Facebook to responsibly operate a surveillance device in their home. https://twitter.com/jengolbeck/status/1049343277110054912

Read more about the devices in Facebook's announcement.

The case for data communities: Why it takes a village to sustain a data-driven business from What's New

Anonymous
04 Dec 2020
9 min read
Editor's note: This article originally appeared in Forbes.

Data is inseparable from the future of work as more organizations embrace data to make decisions, track progress against goals and innovate their products and offerings. But to generate data insights that are truly valuable, people need to become fluent in data—to understand the data they see and participate in conversations where data is the lingua franca. Just as a professional who takes a job abroad needs to immerse herself in the native tongue, businesses that value data literacy need ways to immerse their people in the language of data.

"The best way to learn Spanish is to go to Spain for three weeks," said Stephanie Richardson, vice president of Tableau Community. "It is similar when you're learning the language of data. In a data community, beginners can work alongside people who know data and know how to analyze it. You're going to have people around you that are excited. You're going to see the language being used at its best. You're going to see the potential."

Data communities—networks of engaged data users within an organization—represent a way for businesses to create conditions where people can immerse themselves in the language of data, encouraging data literacy and fueling excitement around data and analytics.

The best data communities provide access to data and support its use with training sessions and technical assistance, but they also build enthusiasm through programs like internal competitions, user group meetings and lunch-and-learns. Community brings people together from across the organization to share learnings, ideas and successes. These exchanges build confidence and camaraderie, lifting morale and rallying people around a shared mission of improving the business with data.

Those who have already invested in data communities are reaping the benefits, even during a global pandemic. People have the data training they need to act quickly in a crisis and know where to go when they have questions about data sources or visualizations, speeding up communication cycles. If building a new data community seems daunting during this time, there are small steps you can take to set a foundation for larger initiatives in the future.

Data communities in a work-from-home world

Before Covid-19, organizations knew collaboration was important. But now, when many work remotely, people are disconnected and further removed from business priorities. Data and analytics communities can be a unifying force that focuses people on the same goals and gives them a dedicated space to connect. For businesses wanting to keep their people active, engaged and innovating with their colleagues, data communities are a sound investment.

"Community doesn't have to be face-to-face activities and big events," said Audrey Strohm, enterprise communities specialist at Tableau. "You participate in a community when you post a question to your organization's internal discussion forum—whenever you take an action to be in the loop."

Data communities are well suited for remote collaboration and virtual connection. Some traits of a thriving data community—fresh content, frequent recognition and small, attainable incentives for participation—apply no matter where its members reside. Data communities can also spark participation by providing a virtual venue, such as an internal chat channel or forum, where members can discuss challenges or share advice. Instead of spending hours spinning in circles, employees can log on and ask a question, access resources or find the right point of contact—all in a protected setting.

Inside a data community at JPMorgan Chase

JPMorgan Chase developed a data community to support data activities and to nurture a data culture. It emphasized immersion, rapid feedback and a gamified structure with skill belts—a concept similar to how students of the martial arts advance through the ranks. Its story shows that, sometimes, a focus on skills is not enough—oftentimes, you need community support. Speaking at Tableau Conference 2019, Heather Gough, a software engineer at the financial services company, shared three tips based on the data community at JPMorgan Chase:

1. Encourage learners to develop skills with any kind of data. Training approaches that center on projects challenge learners to show off their skills with a data set that reflects their personal interests. This gives learners a chance to inject their own passion and keeps the projects interesting for the trainers who evaluate their skills.

2. Not everyone will reach the mountain top, and that's okay. Most participants don't reach the top skill tier. Even those who only advance partway through a skill belt or other data literacy program still learn valuable new skills they can talk about and share with others. That's the real goal, not the accumulation of credentials.

3. Sharing must be included in the design. Part of the progression through the ranks includes spending time sharing newly learned data skills with others. This practice scales as learners become more sophisticated, from fielding just a few questions at low levels to exchanging knowledge with other learners at the top tiers.

How to foster data communities and literacy

While you may not be able to completely shift your priorities to fully invest in a data community right now, you can lay the groundwork for a community by taking a few steps, starting with these:

1. Focus on business needs

The most effective way to stir excitement and adoption of data collaboration is to connect analytics training and community-related activities to relevant business needs. Teach people how to access the most critical data sources, and showcase dashboards from across the company to show how other teams are using data.

Struggling to adapt to new challenges? Bring people together from across business units to innovate and share expertise. Are your data resources going unused? Imagine if people in your organization were excited about using data to inform their decision making. They would seek out those resources rather than begrudgingly look once or twice. Are people still not finding useful insights in their data after being trained? Your people might need to see a more direct connection to their work.

"Foundational data skills create a competitive advantage for individuals and organizations," said Courtney Totten, director of academic programs at Tableau.

When these efforts are supported by community initiatives, you can address business needs faster because you're all trained to look at the same metrics and work together to solve business challenges.

2. Empower your existing data leaders

The future leaders of your data community shouldn't be hard to find. Chances are, they are already in your organization, advocating for more opportunities to explore, understand and communicate with data. Leaders responsible for building a data community do not have to be the organization's top data experts, but they should provide empathic guidance and inject enthusiasm. These people may have already set up informal structures to promote data internally, such as a peer-driven messaging channel. Natural enthusiasm and energy are extremely valuable for creating an authentic and thriving community. Find the people who have already volunteered to help others on their data journeys and give them a stake in the development and management of the community. A reliable leader will need to maintain the community platform and ensure that it keeps its momentum over time.

3. Treat community like a strategic investment

Data communities can foster more engagement with data assets—data sources, dashboards and workbooks. But they can only make a significant impact when they're properly supported. "People often neglect some of the infrastructure that helps maximize the impact of engagement activities," Strohm said. "Community needs to be thought of as a strategic investment."

Data communities need a centralized resource hub that makes it easy to connect from anywhere, share a wide variety of resources and participate in learning modules. Other investments include freeing up a small amount of people's time to engage in the community and assigning a dedicated community leader. Some communities fail when people don't feel as though they can take time away from the immediate task at hand to really connect and collaborate. Also, communities aren't sustainable when they're entirely run by volunteers. If you can't invest in a fully dedicated community leader at this time, consider opening up a small portion of someone's role so they can help build or run community programs.

4. Promote participation at every level

Executive leadership needs to do more than just sponsor data communities and mandate data literacy. They need to be visible, model members. That doesn't mean fighting to the top of every skill tree. Executives should, however, engage in discussions about being accountable for data-driven decisions and be open to fielding tough questions about their own use of data. "If you're expecting your people to be vulnerable, to reach out with questions, to see data as approachable, you can help in this by also being vulnerable and asking questions when you have them," said Strohm.

5. Adopt a data literacy framework

Decide what your contributors need to know for them to be considered data literate. The criteria may include learning database fundamentals and understanding the mathematical and statistical underpinnings of correlation and causation. Ready-made programs such as Tableau's Data Literacy for All provide foundational training across all skill levels.

Data communities give everyone in your organization a venue to collaborate on complex business challenges and reduce uncertainty. Ask your passionate data advocates what they need to communicate more effectively with their colleagues. Recruit participants who are eager to learn and share. And don't be afraid to pose difficult questions about business recovery and growth, especially as everyone continues to grapple with the pandemic. Communities rally around a common cause.

Visit Tableau.com to learn how to develop data communities and explore stories of data-driven collaboration.
Query & Interact with Apps in Android 11 with Package Visibility from Xamarin Blog

Matthew Emerick
15 Oct 2020
4 min read
Android 11 introduced several exciting updates for developers to integrate into their app experience, including new device and media controls, enhanced support for foldables, and a lot more. In addition to new features, there are also several privacy enhancements that developers need to integrate into their application when upgrading and re-targeting to Android 11. One of those enhancements is the introduction of package visibility, which alters the ability to query installed applications and packages on a user's device.

When you want to open a browser or send an email, your application has to launch and interact with another application on the device through an Intent. Before calling StartActivity it is best practice to call QueryIntentActivities or ResolveActivity to ensure there is an application that can handle the request. If you are using Xamarin.Essentials, then you may not have seen these APIs because the library handles all of the logic for you automatically for Browser (External), Email, and SMS.

Before Android 11, every app could easily query all installed applications and see if a specific Intent would open when StartActivity is called. That has all changed with Android 11 with the introduction of package visibility. You will now need to declare what intents and data schemes you want your app to be able to query when your app is targeting Android 11.

Once you retarget to Android 11 and run your application on a device running Android 11, you will receive zero results if you use QueryIntentActivities. If you are using Xamarin.Essentials you will receive a FeatureNotSupportedException when you try to call one of the APIs that needs to query activities. Let's say you are using the Email feature of Xamarin.Essentials. Your code may look like this:

public async Task SendEmail(string subject, string body, List<string> recipients)
{
    try
    {
        var message = new EmailMessage
        {
            Subject = subject,
            Body = body,
            To = recipients
        };
        await Email.ComposeAsync(message);
    }
    catch (FeatureNotSupportedException fbsEx)
    {
        // Email is not supported on this device
    }
    catch (Exception ex)
    {
        // Some other exception occurred
    }
}

If your app targeted Android 10 and earlier, it would just work. With package visibility in Android 11, when you try to send an email, Xamarin.Essentials will try to query for packages that support email and zero results will be returned. This results in a FeatureNotSupportedException being thrown, which is not ideal.

To give your application visibility into those packages, you will need to add a list of queries to your AndroidManifest.xml:

<manifest package="com.mycompany.myapp">
    <queries>
        <intent>
            <action android:name="android.intent.action.SENDTO" />
            <data android:scheme="mailto" />
        </intent>
    </queries>
</manifest>

If you need to query multiple intents or use multiple APIs, you will need to add them all to the list:

<queries>
    <intent>
        <action android:name="android.intent.action.SENDTO" />
        <data android:scheme="mailto" />
    </intent>
    <intent>
        <action android:name="android.intent.action.VIEW" />
        <data android:scheme="http" />
    </intent>
    <intent>
        <action android:name="android.intent.action.VIEW" />
        <data android:scheme="https" />
    </intent>
    <intent>
        <action android:name="android.intent.action.VIEW" />
        <data android:scheme="smsto" />
    </intent>
</queries>

And there you have it: with just a small amount of configuration, your app will continue to work flawlessly when you target Android 11.

Learn More

Be sure to browse through the official Android 11 documentation on package visibility, and of course the newly updated Xamarin.Essentials documentation. Finally, be sure to read through the Xamarin.Android 11 release notes.

Google launches the Enterprise edition of Dialogflow, its chatbot API

Sugandha Lahoti
20 Nov 2017
3 min read
Google has recently announced the Enterprise Edition of Dialogflow, its chatbot API. Dialogflow is Google's API for building chatbots as well as other conversational interfaces for mobile applications, websites, messaging platforms, and IoT devices. It uses machine learning and natural language processing in the backend to power its conversational interfaces. It also has built-in speech recognition support and features new analytics capabilities. Google has now extended the API to enterprises, allowing organizations to build these conversational apps for large-scale usage.

According to Google, Dialogflow Enterprise Edition is a premium pay-as-you-go service. It is targeted at organizations in need of enterprise-grade services that can withstand changes based on user demands, as opposed to the small and medium business owners and individual developers for whom the standard version suffices. The Enterprise Edition also boasts 24/7 support, SLAs, enterprise-level terms of service, and complete data protection, which is why companies are willing to pay a fee to adopt it.

Here's a quick overview of the differences between the standard and the enterprise version of Dialogflow: https://cloud.google.com/dialogflow-enterprise/docs/editions

Apart from this, the API is also part of Google Cloud, so it comes with the same support options as provided to Cloud Platform customers. The Enterprise Edition also supports unlimited text and voice interactions and higher usage quotas compared to the free version. An Enterprise Edition agent can be created using the Google Cloud Platform Console. Adding, editing, or removing entities and intents in the agent can be done using console.dialogflow.com or with the Dialogflow V2 API (a minimal sketch follows at the end of this article).

Here's a quick glance at some top features:

- Natural language understanding, which allows quick extraction of a user's intent and a suitable response, enabling natural and rich interactions between users and businesses.
- Over 30 pre-built agents for quick and easy identification of custom entity types.
- An integrated code editor, to build native serverless applications linked with conversational interfaces through Cloud Functions for Firebase.
- Integration with Google Cloud Speech, for voice interaction support in a single API.
- Cross-platform and multi-language agents, with 20+ languages supported across 14 different platforms.

Uniqlo has used Dialogflow to create its shopping chatbot. Here are the views of Shinya Matsuyama, Director of Global digital commerce, Uniqlo: "Our shopping chatbot was developed using Dialogflow to offer a new type of shopping experience through a messaging interface, and responses are constantly being improved through machine learning. Going forward, we are also looking to expand the functionality to include voice recognition and multiple languages."

According to the official documentation, the project is still in the beta stage; hence, it is not intended for real-time usage in critical applications. You can learn more about the project along with Quickstarts, How-to guides, and Tutorials here.
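As a rough illustration of querying an agent through the Dialogflow V2 API mentioned above, here is a minimal detectIntent sketch using the Google Cloud Dialogflow V2 Java client library. The project ID, session ID, and query text are hypothetical, and class and method names follow the standard quickstart pattern but may vary between client library versions, so treat this as a sketch rather than copy-paste code.

import com.google.cloud.dialogflow.v2.DetectIntentResponse;
import com.google.cloud.dialogflow.v2.QueryInput;
import com.google.cloud.dialogflow.v2.QueryResult;
import com.google.cloud.dialogflow.v2.SessionName;
import com.google.cloud.dialogflow.v2.SessionsClient;
import com.google.cloud.dialogflow.v2.TextInput;

public final class DialogflowDetectIntentSketch {
    public static void main(String[] args) throws Exception {
        String projectId = "my-gcp-project";   // hypothetical GCP project
        String sessionId = "demo-session";     // any identifier for the conversation

        try (SessionsClient sessionsClient = SessionsClient.create()) {
            SessionName session = SessionName.of(projectId, sessionId);

            // Build a text query for the agent.
            TextInput.Builder textInput = TextInput.newBuilder()
                    .setText("I want to order a t-shirt")
                    .setLanguageCode("en-US");
            QueryInput queryInput = QueryInput.newBuilder().setText(textInput).build();

            // Send the query and read the matched intent and response text.
            DetectIntentResponse response = sessionsClient.detectIntent(session, queryInput);
            QueryResult result = response.getQueryResult();
            System.out.println("Matched intent: " + result.getIntent().getDisplayName());
            System.out.println("Response: " + result.getFulfillmentText());
        }
    }
}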

Introduction to props in React from ui.dev's RSS Feed

Matthew Emerick
21 Aug 2020
3 min read
Whenever you have a system that is reliant upon composition, it's critical that each piece of that system has an interface for accepting data from outside of itself. You can see this clearly illustrated by looking at something you're already familiar with: functions.

function getProfilePic (username) {
  return 'https://photo.fb.com/' + username
}

function getProfileLink (username) {
  return 'https://www.fb.com/' + username
}

function getAvatarInfo (username) {
  return {
    pic: getProfilePic(username),
    link: getProfileLink(username)
  }
}

getAvatarInfo('tylermcginnis')

We've seen this code before as our very soft introduction to function composition. Without the ability to pass data, in this case username, to each of our functions, our composition would break down.

Similarly, because React relies heavily on composition, there needs to exist a way to pass data into components. This brings us to our next important React concept, props. Props are to components what arguments are to functions. Again, the same intuition you have about functions and passing arguments to functions can be directly applied to components and passing props to components. There are two parts to understanding how props work. First is how to pass data into components, and second is accessing the data once it's been passed in.

Passing data to a component

This one should feel natural because you've been doing something similar ever since you learned HTML. You pass data to a React component the same way you'd set an attribute on an HTML element.

<img src='' />

<Hello name='Tyler' />

In the example above, we're passing in a name prop to the Hello component.

Accessing props

Now the next question is, how do you access the props that are being passed to a component? In a class component, you can get access to props from the props key on the component's instance (this).

class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.name}</h1>
    )
  }
}

Each prop that is passed to a component is added as a key on this.props. If no props are passed to a component, this.props will be an empty object.

class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.first} {this.props.last}</h1>
    )
  }
}

<Hello first='Tyler' last='McGinnis' />

It's important to note that we're not limited in what we can pass as props to components. Just like we can pass functions as arguments to other functions, we're also able to pass components (or really anything we want) as props to other components.

<Profile
  username='tylermcginnis'
  authed={true}
  logout={() => handleLogout()}
  header={<h1>👋</h1>}
/>

If you pass a prop without a value, that value will be set to true. These are equivalent.

<Profile authed={true} />

<Profile authed />

React Native announces re-architecture of the framework for better performance

Kunal Chaudhari
22 Jun 2018
4 min read
React Native, the cross-platform mobile development framework from Facebook, is undergoing a complete rewrite with a focus on better flexibility and improved integration with native infrastructure.

Why React Native

When React Native was announced at React.js Conf 2015, Facebook opened the doors for web developers and engineers who wanted to take their existing skill set into the world of mobile development. Since then React Native has been nothing short of a phenomenon, becoming the 13th most popular open source project on GitHub.

React Native came with the promise of revolutionizing the user interface development process with its core set of cross-platform UI primitives and its popular declarative rendering pattern. Previously, there were many frameworks that branded themselves as "cross-platform", like Ionic and Cordova, but simply put, they just rendered inside a web view, as an "HTML5 app" or a "hybrid app". These apps lacked the native feel of an Android/iOS app made with Java/Swift and led to a terrible user experience.

React Native, on the other hand, works a bit differently: the user interface (UI) components are kept in the native block and the business logic is kept in the JavaScript block. On any user interaction or request, the UI block detects the change and sends it to the JavaScript block, which processes the request and sends the data back to the UI block. This allows the UI block to perform with a native experience, since the processing is done elsewhere.

The dawn of a new beginning

As cool as these features may sound, working with React Native can be quite difficult. If there is a feature you need that is not yet supported by the React Native library, you have to write your own native module in the corresponding language, which can then be linked to the React Native codebase. Several native modules, like gesture handling and native navigation, are not present in the ecosystem, and complex hacks are required to include them in the native components. For apps with complex integration between React Native and existing app code, this is frustrating.

Sophie Alpert, Engineering Manager at Facebook, mentioned in a blog post named State of React Native 2018, "We're rewriting many of React Native's internals, but most of the changes are under the hood: existing React Native apps will continue to work with few or no changes." This comes as no surprise, as Facebook clearly cares about developer experience and hence decided to go ahead with this architectural change with almost no breaking changes. A similar, widely applauded move was the transition to React Fiber.

This architectural change is aimed at making the framework more lightweight and a better fit for existing native apps, and involves three major internal changes:

- New and improved threading model: It will be possible to call synchronously into JavaScript on any thread for high-priority updates while keeping low-priority work off the main thread.
- New async rendering capabilities: This will allow multiple rendering priorities and simplify asynchronous data handling.
- A lighter and faster bridge: Direct calls between native and JavaScript are more efficient and will make it easier to build debugging tools like cross-language stack traces.

Along with these architectural changes, Facebook also hinted at slimming down React Native to make it fit better with the JavaScript ecosystem, including making the VM and bundler swappable.

React Native is a brilliantly designed cross-platform framework which gave a new dimension to mobile development and new hope to web developers. Is this restructuring going to cement its place as a top player in the mobile development marketplace? Only time will tell. Till then, you can read more about the upcoming changes on the official website.
'Developers' lives matter': Chinese developers protest over the “996 work schedule” on GitHub

Natasha Mathur
29 Mar 2019
3 min read
Working long hours, devoid of any work-life balance, is rife in China's tech industry. Earlier this week, on Tuesday, a GitHub user with the name "996icu" created a webpage and shared it on GitHub to protest against the "996" work culture in Chinese tech companies.

The "996" work culture is an unofficial work schedule that requires employees to work from 9 am to 9 pm, 6 days a week, totaling up to 60 hours of work per week. The 996icu webpage cites the Labor Law of the People's Republic of China, according to which an employer can ask its employees to work long hours due to the needs of production or business, but the extended work time should not exceed 36 hours a month. Also, as per the Labor Law, employees following the "996" work schedule should be paid 2.275 times their base salary. In reality, however, Chinese employees following the 996 work rule rarely get paid that much.

GitHub users also called out companies like Youzan and Jingdong, which both follow the 996 work rule. The webpage cites the example of a Jingdong PR account that posted on Maimai (a Chinese business social network) that "Our culture is to devote ourselves with all our hearts (to achieve the business objectives)".

The 996 work schedule started to gain attention in recent years but has been a "secret practice" for quite a while. The 996icu webpage went viral online and ranked first on GitHub's trending page on Thursday. It has currently amassed more than 90,000 stars (GitHub's bookmarking feature). The post is also being widely shared on Chinese social media platforms such as Weibo and WeChat, where many users are talking about their experiences as tech workers who followed the 996 schedule.

This gladiatorial work environment in Chinese firms has long been a bone of contention. South China Morning Post writer Zheping Huang published a post sharing stories of different Chinese tech employees that shed light on the grotesque reality of China's Silicon Valley. One example is a 33-year-old Beijing native, Yang, who works as a product manager at a Chinese internet company and wakes up at 6 am every day for a two-and-a-half-hour commute to work. Another is Bu, a 20-something marketing specialist who relocated to an old complex near her workplace; she pays high rent, shares a room with two other women, and no longer has access to coffee shops or good restaurants.

A user named "discordance" on Hacker News commented on the GitHub protest, asking developers in China to move to better companies: "Leave your company, take your colleagues and start one with better conditions. You are some of the best engineers I've worked with and deserve better." Another user, "ceohockey60", commented: "The Chinese colloquial term for a developer is '码农'. Its literal English translation is 'code peasants' -- not the most flattering or respectful way to call software engineers. I've recently heard horror stories, where 9-9-6 is no longer enough inside one of the Chinese tech giants, and 10-10-7 is expected (10am-10pm, 7 days/week)."

The 996icu webpage states that people who "consistently follow the '996' work schedule ... run the risk of getting ... into the Intensive Care Unit. Developers' lives matter."

How Near Real Time (NRT) Applications work

Amarabha Banerjee
10 Nov 2017
6 min read
In this article by Shilpi Saxena and Saurabh Gupta, from their book Practical Real-time Data Processing and Analytics, we shall explore what a near real-time architecture looks like and how an NRT app works.

It's very important to understand the key aspects where traditional monolithic application systems fall short of serving the need of the hour:

- Backend DB: single-point, monolithic data access.
- Ingestion flow: the pipelines are complex and tend to induce latency in the end-to-end flow, and the systems are failure-prone while the recovery approach is difficult and complex.
- Synchronization and state capture: it's very difficult to capture and maintain the state of facts and transactions in the system, and diversely distributed systems and real-time system failures further complicate the design and maintenance of such systems.

The answer to the above issues is an architecture that supports streaming and thus provides its end users access to actionable insights in real time over ever-flowing in-streams of real-time fact data:

- Local state and consistency of the system for large-scale, high-velocity systems
- Data doesn't arrive at intervals; it keeps flowing in, and it's streaming in all the time
- No single state of truth in the form of a backend database; instead, the applications subscribe or tap into the stream of fact data

Before we delve further, it's worthwhile to understand the notion of time across these systems, correlating the SLAs of each type of implementation (batch, near real-time, and real-time) with the kinds of use cases it caters to. For instance, batch implementations have SLAs ranging from a couple of hours to days, and such solutions are predominantly deployed for canned/pre-generated reports and trends. Near real-time solutions have SLAs on the order of a few seconds to hours and cater to situations requiring ad-hoc queries, mid-resolution aggregators, and so on. Real-time applications are the most mission-critical in terms of SLA and resolution: every event is accounted for, and results have to return within an order of milliseconds to seconds.

Near real-time (NRT) architecture

In its essence, NRT architecture consists of four main components/layers:

- The message transport pipeline
- The stream processing component
- The low-latency data store
- Visualization and analytical tools

The first step is the collection of data from the source and providing it to the "data pipeline", which is actually a logical pipeline that collects the continuous events or streaming data from various producers and provides it to the consumer stream-processing applications. These applications transform, collate, correlate, aggregate, and perform a variety of other operations on this live streaming data and then finally store the results in the low-latency data store. Then, a variety of analytical, business intelligence, and visualization tools and dashboards read this data from the data store and present it to the business user.

Data collection

This is the beginning of the journey for all data processing, be it batch or real time: the foremost challenge is to get the data from its source into the systems that process it. The processing unit can be viewed as a black box, with data sources acting as publishers and consumers as subscribers.

The key criteria for data collection tools, in the general context of big data and real-time processing specifically, are as follows:

- Performance and low latency
- Scalability
- Ability to handle structured and unstructured data

Apart from this, the data collection tool should be able to cater to data from a variety of sources, such as:

- Data from traditional transactional systems: either duplicate the ETL process of these traditional systems and tap the data from the source, tap the data from the ETL systems, or, as a third and better approach, go with a virtual data lake architecture for data replication.
- Structured data from IoT/sensors/devices, or CDRs: this is data that arrives at a very high velocity and in a fixed format; it can come from a variety of sensors and telecom devices.
- Unstructured data from media files, text data, social media, and so on: this is the most complex of all incoming data, where the complexity is due to the dimensions of volume, velocity, variety, and structure.

Stream processing

The stream processing component itself consists of three main sub-components:

- The broker, which collects and holds the events or data streams from the data collection agents.
- The processing engine, which actually transforms, correlates, and aggregates the data, and performs the other necessary operations.
- The distributed cache, which serves as a mechanism for maintaining a common data set across all distributed components of the processing engine.

(A minimal consumer sketch illustrating this flow appears at the end of this excerpt.)

There are a few key attributes that the stream processing component should provide:

- Distributed components, thus offering resilience to failures
- Scalability, to cater to the growing needs of the application or a sudden surge of traffic
- Low latency, to handle the overall SLAs expected from such an application
- Easy operationalization of use cases, to be able to support evolving use cases
- Built for failures: the system should be able to recover from inevitable failures without any event loss, and should be able to reprocess from the point it failed
- Easy integration points with respect to off-heap/distributed caches or data stores
- A wide variety of operations, extensions, and functions to work with the business requirements of the use case

Analytical layer - serve it to the end user

The analytical layer is the most creative and interesting of all the components of an NRT application. So far, all we have talked about is backend processing, but this is the layer where we actually present the output/insights to the end user graphically and visually, in the form of actionable items. A few of the challenges these visualization systems should be capable of handling are:

- The need for speed
- Understanding the data and presenting it in the right context
- Dealing with outliers

Information flows from the event producers to the collection agents, followed by the broker and the processing engine (transformation, aggregation, and so on), and then to long-term storage. From the storage unit, the visualization tools reap the insights and present them in the form of graphs, alerts, charts, Excel sheets, dashboards, or maps to the business owners, who can assimilate the information and take action based upon it.

The above was an excerpt from the book Practical Real-time Data Processing and Analytics.
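To make the broker, processing engine, and low-latency store flow described above a little more concrete, here is a minimal sketch of a consumer tapping a message transport pipeline. It uses the plain Apache Kafka consumer API purely as an illustration (the book discusses several technologies, so this is not the authors' reference implementation); the broker address, topic name, and group id are hypothetical, and the "processing" and "store" steps are reduced to a comment and a print.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public final class NrtPipelineSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("group.id", "nrt-demo");                // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // hypothetical topic

            // Continuously tap the stream: the broker holds the events, this loop is
            // the (trivialized) processing engine, and the final step would write the
            // transformed/aggregated result to a low-latency store for the analytics layer.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // Transform/aggregate here, then persist to the low-latency store.
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}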

Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle deep learning platform

Vincy Davis
15 Nov 2019
3 min read
Yesterday, Baidu's deep learning open platform PaddlePaddle (PArallel Distributed Deep LEarning) released its latest version with 21 new products, such as Paddle Lite 2.0, four end-to-end development kits including ERNIE for semantic understanding (NLP), toolkits, and other new upgrades. PaddlePaddle is an easy-to-use, flexible, and scalable deep learning platform developed for applying deep learning to many products at Baidu.

Paddle Lite 2.0
The main goal of Paddle Lite is to maintain the low latency and high efficiency of AI applications when they are running on resource-constrained devices. Launched last year, Paddle Lite is customized for inference on mobile, embedded, and IoT devices. It is also compatible with PaddlePaddle and other pre-trained models. With enhanced usability in Paddle Lite 2.0, developers can deploy ResNet-50 with seven lines of code. The new version adds support for more hardware units such as edge-based FPGAs and also permits low-precision inference using operators with the INT8 data type.

New development kits
The development kits aim to continuously reduce the development threshold for low-cost and rapid model construction.

ERNIE for semantic understanding (NLP): ERNIE (Enhanced Representation through kNowledge IntEgration) is a continual pre-training framework for semantic understanding. Earlier this year in July, Baidu open sourced the ERNIE 2.0 model and revealed that ERNIE 2.0 outperformed BERT and XLNet in 16 NLP tasks, including English tasks on GLUE benchmarks and several Chinese tasks.
PaddleDetection: It has more than 60 easy-to-use object detection models.
PaddleSeg for computer vision (CV): It is an end-to-end image segmentation library that supports data augmentation, modular design, and end-to-end deployment.
Elastic CTR for recommendation: Elastic CTR is a newly released solution that provides process documentation for distributed training on Kubernetes (k8s) clusters. It also provides distributed parameter deployment forecasts as a one-click solution.

EasyDL Pro
EasyDL is an AI platform for novice developers to train and build custom models via a drag-and-drop interface. EasyDL Pro is a one-stop AI development platform for algorithm engineers to deploy AI models with fewer lines of code.

Master mode
The Master mode helps developers customize models for specific tasks. It has a large library of pre-trained models and tools for transfer learning.

Other new upgrades
New toolkits for graph, federated, and multi-task learning.
APIs upgraded for flexibility, usability, and improved documentation.
A new PaddlePaddle module for model compression called PaddleSlim, which enables quantization training and a hardware-based small-model search capability.
Paddle2ONNX and X2Paddle upgraded for improved conversion of trained models between PaddlePaddle and other frameworks.

Head over to Baidu's blog for more details.

Baidu open sources 'OpenEdge' to create a 'lightweight, secure, reliable and scalable edge computing community'
Unity and Baidu collaborate for simulating the development of autonomous vehicles
CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries
GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub's ICE contract
Brave 1.0 releases with focus on user privacy, crypto currency-centric private ads and payment platform

12,000+ unsecured MongoDB databases deleted by Unistellar attackers

Vincy Davis
21 May 2019
3 min read
Over the last three weeks, more than 12,000 unsecured MongoDB databases have been deleted. The cyber-extortionists have left only an email contact, most likely to negotiate the terms of data recovery. Attackers looking for exposed database servers use the BinaryEdge or Shodan search engines to find and delete them, and usually demand a ransom for their 'restoration services'.

MongoDB is not new to such attacks: back in September 2017, MongoDB databases were hacked for ransom. Earlier this month, Security Discovery researcher Bob Diachenko found an unprotected MongoDB database which exposed 275M personal records of Indian citizens. The records contained detailed personally identifiable information such as name, gender, date of birth, email, mobile phone number, and more. This information was left exposed and unprotected on the internet for more than two weeks.

https://twitter.com/MayhemDayOne/status/1126151393927102464

The latest attack on MongoDB databases was found by Sanyam Jain, an independent security researcher. Sanyam first noticed the attacks on April 24, when he discovered a wiped MongoDB database. Instead of finding huge quantities of leaked data, he found a note stating: "Restore ? Contact : unistellar@yandex.com". It was later discovered that the cyber-extortionists had left behind ransom notes asking the victims to get in touch if they want to restore their data. Two email addresses were provided for this: unistellar@hotmail.com or unistellar@yandex.com.

The attackers appear to have automated the process of finding and wiping databases in such large numbers. The script or program used to connect to the publicly accessible MongoDB databases is configured to indiscriminately delete every unsecured MongoDB instance it can find and then add it to the ransom table.

In a statement to Bleeping Computer, Sanyam Jain says, "the Unistellar attackers seem to have created restore points to be able to restore the databases they deleted."

Bleeping Computer has stated that there is no way to track whether victims have been paying for the databases to be restored, because Unistellar only provides an email address and no cryptocurrency address. Bleeping Computer also tried to get in touch with Unistellar to confirm whether the wiped MongoDB databases are indeed backed up and whether any victims have already paid for the "restoration services", but got no response. To know more about this news in detail, head over to Bleeping Computer's complete coverage.

How to secure MongoDB databases
These attacks succeed because MongoDB databases are left remotely accessible and access to them is not properly secured. Such frequent attacks highlight the need for effective protection of data, which is possible by following fairly simple steps designed to properly secure one's database. Users should take the simple preventive measures of enabling authentication and not allowing the databases to be remotely accessible; a short sketch of these two steps follows below.

MongoDB has also provided a detailed Security manual. It covers features such as authentication, access control, and encryption to secure MongoDB deployments. There is also a Security Checklist for administrators to protect a MongoDB deployment. The list discusses the proper way of enforcing authentication, enabling role-based access control, encrypting communication, limiting network exposure, and many more factors for effectively securing MongoDB databases.
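To make the hardening advice a little more concrete, here is a minimal sketch (not taken from the MongoDB manual) of those two preventive measures: it creates an administrative user with PyMongo and points out, in comments, the mongod settings that enforce authentication and limit network exposure. The host, user name, and password are placeholders.

```python
from pymongo import MongoClient  # pip install pymongo

# Connect to a mongod that is only reachable from localhost.
# Keeping net.bindIp set to 127.0.0.1 in mongod.conf is what prevents the kind of
# public exposure exploited in these attacks; setting security.authorization: enabled
# then forces every client to authenticate.
client = MongoClient("mongodb://127.0.0.1:27017")

# Create an administrative user so that, once authorization is enabled,
# access is limited to authenticated, role-based accounts.
client.admin.command(
    "createUser",
    "siteAdmin",                     # placeholder user name
    pwd="choose-a-strong-password",  # placeholder password
    roles=[{"role": "userAdminAnyDatabase", "db": "admin"}],
)
```

After creating the user and enabling authorization in mongod.conf, restart mongod and confirm that unauthenticated connections can no longer read or modify data.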
MongoDB is going to acquire Realm, the mobile database management system, for $39 million
MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process
GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL

Prevent planned downtime during the holiday shopping season with Cloud SQL from Cloud Blog

Matthew Emerick
15 Oct 2020
3 min read
Routine database maintenance is a way of life. Updates keep your business running smoothly and securely. And with a managed service like Cloud SQL, your databases automatically receive the latest patches and updates, with significantly less downtime. But we get it: nobody likes downtime, no matter how brief.

That's why we're pleased to announce that Cloud SQL, our fully managed database service for MySQL, PostgreSQL, and SQL Server, now gives you more control over when your instances undergo routine maintenance. Cloud SQL is introducing maintenance deny period controls. With maintenance deny periods, you can prevent automatic maintenance from occurring during a 90-day time period.

This can be especially useful for Cloud SQL retail customers about to kick off their busiest time of year, with Black Friday and Cyber Monday just around the corner. The holiday shopping season is a time of peak load that requires heightened focus on infrastructure stability, and any upgrades can put that at risk. By setting a maintenance deny period from mid-October to mid-January, these businesses can prevent planned upgrades from Cloud SQL during this critical time.

Understanding Cloud SQL maintenance
Before describing these new controls, let's answer a few questions we often hear about the automatic maintenance that Cloud SQL performs.

What is automatic maintenance?
To keep your databases stable and secure, Cloud SQL automatically patches and updates your database instance (MySQL, Postgres, and SQL Server), including the underlying operating system. To perform maintenance, Cloud SQL must temporarily take your instances offline.

What is a maintenance window?
Maintenance windows allow you to control when maintenance occurs. Cloud SQL offers maintenance windows to minimize the impact of planned maintenance downtime to your applications and your business. Defining the maintenance window lets you set the hour and day when an update occurs, such as only when database activity is low (for example, on Saturday at midnight). Additionally, you can control the order of updates for your instance relative to other instances in the same project ("Earlier" or "Later"). Earlier timing is useful for test instances, allowing you to see the effects of an update before it reaches your production instances.

What are the new maintenance deny period controls?
You can now set a single deny period, configurable from 1 to 90 days, each year. During the deny period, Cloud SQL will not perform maintenance that causes downtime on your database instance. Deny periods can be set to reduce the likelihood of downtime during the busy holiday season, your next product launch, end-of-quarter financial reporting, or any other important time for your business. Paired with Cloud SQL's existing maintenance notification and rescheduling functionality, deny periods give you even more flexibility and control. After receiving a notification of upcoming maintenance, you can reschedule ad hoc or, if you want to prevent maintenance for longer, set a deny period.

Getting started with Cloud SQL's new maintenance control
Review our documentation to learn more about maintenance deny periods and, when you're ready, start configuring them for your database instances. A sketch of one way this could be done programmatically follows below.
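For illustration only, here is a sketch of how a deny period might be set programmatically through the Cloud SQL Admin API's instances.patch method with the Python API client. The project ID, instance name, and dates are placeholders, and the exact shape of the denyMaintenancePeriods setting should be verified against the current API reference; the Cloud Console and gcloud expose the same control.

```python
from googleapiclient import discovery  # pip install google-api-python-client

# Build a Cloud SQL Admin API client using application-default credentials.
service = discovery.build("sqladmin", "v1beta4")

# Request body adding a deny maintenance period that covers the holiday season.
# The denyMaintenancePeriods field follows the Cloud SQL Admin API's instance
# settings; verify its exact shape against the API reference before relying on it.
body = {
    "settings": {
        "denyMaintenancePeriods": [
            {
                "startDate": "2020-10-15",  # no planned maintenance from mid-October...
                "endDate": "2021-01-15",    # ...until mid-January
                "time": "00:00:00",         # time of day the period starts and ends
            }
        ]
    }
}

request = service.instances().patch(
    project="my-retail-project",  # placeholder project ID
    instance="orders-db",         # placeholder instance name
    body=body,
)
operation = request.execute()  # returns a long-running operation describing the patch
```

Removing the deny period later would be a matter of patching the instance again with an empty denyMaintenancePeriods list.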
What's next for Cloud SQL
Support for additional maintenance controls continues to be a top request from users. These new deny periods are an addition to the list of existing maintenance controls for Cloud SQL. Have more ideas? Let us know what other features and capabilities you need with our Issue Tracker and by joining the Cloud SQL discussion group. We're glad you're along for the ride, and we look forward to your feedback!