
Tech News

3711 Articles

Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0

Savia Lobo
29 Jun 2018
2 min read
Zefflin Systems has announced Release 2.0 of its ServiceNow Plugin for Red Hat Ansible. The plugin helps IT operations teams map IT services to infrastructure for automatically deployed environments. Release 2.0 enables the use of the ServiceNow Catalog and Request Management modules to:

- Facilitate deployment options for users
- Capture requests and route them for approval
- Invoke Ansible playbooks to auto-deploy server, storage, and networking

The plugin also provides full integration with ServiceNow Change Management for complete ITIL-compliant auditability.

Key features and benefits of the ServiceNow Plugin 2.0 are:

- Support for AWX: customers on the open source version of Ansible can easily integrate it into ServiceNow.
- Automated Catalog Variable Creation: Plugin 2.0 reads the target Ansible playbook and automatically creates the input variables in the ServiceNow catalog entry. This significantly reduces implementation time and maintenance effort, so new playbooks can be onboarded in less time.
- Update to Ansible Job Completion: this extends the amount of information returned from an Ansible playbook and logged into the ServiceNow request, dramatically improving the audit trail and providing a higher degree of process control.

The ServiceNow Plugin for Ansible enables DevOps with ServiceNow integration by establishing:

- Standardized development architectures
- An effective routing approval process
- An ITIL-compliant audit framework
- Faster deployment
- An automated process that frees up the team to focus on other activities

Read more about the ServiceNow Plugin in detail on Zefflin Systems' official blog post.

Related reading:
Mastering Ansible – Protecting Your Secrets with Ansible
An In-depth Look at Ansible Plugins
Installing Red Hat CloudForms on Red Hat OpenStack
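
The announcement itself contains no code. Purely as an illustration of the kind of call such a plugin automates once a catalog request is approved, here is a minimal Python sketch of invoking an Ansible playbook programmatically with the ansible-runner library. The playbook name, directory layout, and variables are hypothetical, and this is not Zefflin's implementation, which drives Ansible/AWX from ServiceNow.

# Rough sketch only: running a playbook programmatically, the kind of step a
# ServiceNow catalog request would ultimately trigger. Names are hypothetical.
import ansible_runner

result = ansible_runner.run(
    private_data_dir=".",          # directory containing project/, inventory/, env/
    playbook="deploy_stack.yml",   # hypothetical playbook deploying server/storage/network
    extravars={"env": "staging", "instance_count": 2},  # values a catalog form might capture
)
print(result.status, result.rc)    # e.g. "successful", 0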

Daily Coping 24 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
24 Dec 2020
2 min read
I started adding a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag.

Today's tip is to give away something you have been holding on to. I have made more donations this year than in the past. Partially I think this is because life slowed down and I had time to clean out some spaces. However, I have more to do, and when I saw this item, I decided to do something new. I'm a big supporter of Habitat for Humanity. During my first sabbatical, I volunteered there quite a bit, and I've continued to do that periodically since. I believe shelter is an important resource most people need.

I've had some tools at the house that I've held onto, thinking they would be good spares. I have a few cordless items, but I have an older miter saw and a table saw that work fine. Habitat doesn't take these, but I donated them to another local charity that can make use of them. I'm hoping someone will use them to improve their lives, either building something or maybe using them in their work.

The post Daily Coping 24 Dec 2020 appeared first on SQLServerCentral.

European consumer groups accuse Google of tracking its users’ location, call it a breach of GDPR

Sugandha Lahoti
29 Nov 2018
4 min read
Just when Google is facing large walkouts and protests against its policies, another consumer group has lodged a complaint against Google's user tracking. According to a report published by the European Consumer Organisation (BEUC), Google uses various methods to encourage users to enable the 'location history' and 'web and app activity' settings, which are integrated into all Google user accounts. The group alleges that Google uses these features to facilitate targeted advertising. BEUC and its members, including those from the Czech Republic, Greece, Norway, Slovenia, and Sweden, argue that what Google is doing is in breach of the GDPR.

Per the report, BEUC says, "We argue that consumers are deceived into being tracked when they use Google services. This happens through a variety of techniques, including withholding or hiding information, deceptive design practices, and bundling of services. We argue that these practices are unethical, and that they in our opinion are in breach of European data protection legislation because they fail to fulfill the conditions for lawful data processing."

Android users are generally unaware that their Location History or Web & App Activity is enabled. Google uses a variety of dark patterns to collect the exact location of the user, including elevation (e.g. the floor of a building) and mode of transportation, both outdoors and indoors, to serve targeted advertising. Moreover, there is no real option to turn off Location History, only to pause it. Even if the user has kept Location History disabled, their location will still be shared with Google through Web & App Activity.

"If you pause Location history, we make clear that — depending on your individual phone and app settings — we might still collect and use location data to improve your Google experience," a Google spokesperson told Reuters.

"These practices are not compliant with the General Data Protection Regulation (GDPR), as Google lacks a valid legal ground for processing the data in question. In particular, the report shows that users' consent provided under these circumstances is not freely given," said BEUC, speaking on behalf of the countries' consumer groups.

Google claims a legitimate interest in serving ads based on personal data, but the fact that location data is collected, and how it is used, is not clearly expressed to the user. BEUC calls out Google, saying the company treats its legitimate interest in serving advertising as part of its business model as overriding the data subject's fundamental right to privacy. BEUC argues that, in light of how Web & App Activity is presented to users, the interests of the data subject should take precedence.

Reuters asked a Google spokesperson to comment on the consumer groups' complaints. According to Google, "Location History is turned off by default, and you can edit, delete, or pause it at any time. If it's on, it helps to improve services like predicted traffic on your commute. We're constantly working to improve our controls, and we'll be reading this report closely to see if there are things we can take on board."

People are largely supportive of the allegations BEUC has made against Google.
https://www.youtube.com/watch?v=qIq17DeAc1M

However, some people feel that it is just another attack on Google, and that if people voluntarily, and most of them knowingly, use these services and consent to giving personal information, it should not be a concern for any third party.

"I can't help but think that there's some competitors' money behind these attacks on Google. They provide location services which you can turn off or delete yourself, which is anonymous to anyone else, and there's no evidence they sell your data (they just anonymously connect you to businesses you search for). Versus carriers which track you without an option to opt-in or out and actually do sell your data to 3rd parties."

"If the vast majority of customers don't know arithmetic, then yes, that's exactly what happened. Laws are a UX problem, not a theory problem. If most of your users end up getting deceived, you can't say "BUT IT WAS ALL RIGHT THERE IN THE SMALL PRINT, IT'S NOT MY FAULT THEY DIDN'T READ IT!". Like, this is literally how everything else works."

Read the full conversation on Hacker News. You may also go through the full "Every step you take" report published by BEUC for more information.

Related reading:
Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
Google hints shutting down Google News over EU's implementation of Article 11 or the "link tax"

Facebook sets aside $5 billion in anticipation of an FTC penalty for its “user data practices”

Savia Lobo
25 Apr 2019
4 min read
Yesterday, Facebook revealed in its first-quarter financial report that it has set aside $5 billion in anticipation of a fine from the US Federal Trade Commission (FTC). This charge is "in connection with the inquiry of the FTC into our platform and user data practices", the company said.

The company noted in the report that the expense results in a 51% year-over-year decline in net income, to just $2.4bn. Excluding this one-time expense, Facebook's earnings per share would have beaten analyst expectations, and its operating margin (22%) would have been 20 points higher. Facebook said, "We estimate that the range of loss in this matter is $3.0bn to $5.0bn. The matter remains unresolved, and there can be no assurance as to the timing or the terms of any final outcome."

In the wake of the Cambridge Analytica scandal, the FTC commenced its investigation into Facebook's privacy practices in March last year. The investigation focused on whether the data practices that allowed Cambridge Analytica to obtain Facebook user data violated the company's 2011 agreement with the FTC. "Facebook and the FTC have reportedly been negotiating over the settlement, which will dwarf the prior largest penalty for a privacy lapse, a $22.5m fine against Google in 2012", The Guardian reports.

Read Also: European Union fined Google 1.49 billion euros for antitrust violations in online advertising

"Levying a sizable fine on Facebook would go against the reputation of the United States of not restraining the power of big tech companies", The New York Times reports. Justin Brookman, a former official at the regulator who is currently director of privacy at Consumers Union, a nonprofit consumer advocacy group, said, "The F.T.C. is really limited in what they can actually do in enforcing a consent decree, but in the case of Facebook, they had public pressure on their side."

Christopher Wylie, a research director at H&M and the Cambridge Analytica whistleblower, spoke out against Facebook by tweeting, "Facebook, you banned me for whistleblowing. You threatened @carolecadwalla and the Guardian. You tried to cover up your incompetent conduct. You thought you could simply ignore the law. But you can't. Your house of cards will topple."
https://twitter.com/chrisinsilico/status/1121150233541525505

Senator Richard Blumenthal, Democrat of Connecticut, said in a tweet, "Facebook must be held accountable — not just by fines — but also far-reaching reforms in management, privacy practices, and culture."

Debra Aho Williamson, an eMarketer analyst, warned that the expectation of an FTC fine may portend future trouble. "This is a significant development, and any settlement with the FTC may impact the ways advertisers can use the platform in the future," she said. Jessica Liu, a marketing analyst for Forrester, said that Facebook has to show signs that it is improving on user data practices and content management: "Its track record has been atrocious. No more platitudes. What action is Facebook Inc actually taking?"

"For Facebook, a $5 billion fine would amount to a fraction of its $56 billion in annual revenue. Any resolution would also alleviate some of the regulatory pressure that has been intensifying against the company over the past two and a half years", The New York Times reports.

To know more about this news in detail, visit Facebook's official press release.

Related reading:
Facebook hires a new general counsel and a new VP of global communications even as it continues with no Chief Security Officer
Facebook shareholders back a proposal to oust Mark Zuckerberg as the board's chairperson
"Is it actually possible to have a free and fair election ever again?," Pulitzer finalist Carole Cadwalladr on Facebook's role in Brexit

PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!

Melisha Dsouza
25 Oct 2018
3 min read
Three years ago, the PipelineDB team published the very first release of PipelineDB as a fork of PostgreSQL. It received enormous support and feedback from thousands of organizations worldwide, including several Fortune 100 companies, and releasing the fork as an extension of PostgreSQL was a highly requested change. Yesterday, the team released PipelineDB 1.0.0 as a PostgreSQL extension under the liberal Apache 2.0 license.

What is PipelineDB?
PipelineDB is designed for storing huge amounts of time-series data that need to be continuously aggregated. It stores only the compact output of these continuous queries as incrementally updated table rows, which can be evaluated with minimal query latency. It is used for analytics use cases that only require summary data, for instance real-time reporting dashboards. PipelineDB is especially beneficial in scenarios where queries are known in advance. These queries can be run continuously, making the data infrastructure that powers real-time analytics applications simpler, faster, and cheaper than the traditional "store first, query later" data processing model.

How does PipelineDB work?
PipelineDB uses SQL to write time-series events to streams, which are also structured as tables. A continuous view is then used to perform an aggregation over a stream. Even if billions of rows are written to the stream, the continuous view ensures that only one physical row per hour is actually persisted within the database. Once the continuous view has read new incoming events and the distinct count has been updated to reflect the new information, the raw events are discarded and not stored in PipelineDB. This enables:

- Enormous levels of raw event throughput on modest hardware footprints
- Extremely low read query latencies
- Breaking the traditional dependence between data volumes ingested and data volumes stored

All of this gives the system high performance that is sustained indefinitely.

PipelineDB also supports another type of continuous query called continuous transforms. Continuous transforms are stateless; they apply a transformation to a stream and write the result out to another stream.

Features of PipelineDB
PipelineDB 1.0.0 brings several changes relative to version 0.9.7. The main highlights are as follows:

- Non-standard syntax has been removed.
- Configuration parameters are now qualified by pipelinedb.
- PostgreSQL pg_dump, pg_restore, and pg_upgrade tooling is now used instead of the PipelineDB variants.
- Certain functions and aggregates have been renamed to describe the problem they solve for users: "Top-K" now represents Filtered-Space-Saving, "Distributions" now refer to T-Digests, and "Frequency" now refers to Count-Min-Sketch.
- Bloom filters have been introduced for set membership analysis.
- Distributions and percentiles analysis is now possible.

What's more?
Continuous queries can be chained together into arbitrarily complex topologies of continuous computation. Each continuous query produces its own output stream of incremental updates, which can be consumed by another continuous query like any other stream. The team aims to follow up with automated partitioning for continuous views in an upcoming release.

You can head over to the PipelineDB blog for more insights on this news.

Related reading:
Citus Data to donate 1% of its equity to non-profit PostgreSQL organizations
PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation
PostgreSQL group releases an update to 9.6.10, 9.5.14, 9.4.19, 9.3.24
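
To make the continuous-view model described in this piece concrete, here is a toy Python sketch of the underlying idea. It is an in-memory stand-in, not PipelineDB's actual SQL interface: events are folded into one aggregate row per hour and the raw events themselves are never kept.

# Toy illustration of a continuous aggregate: only one row per hour is persisted,
# raw events are folded in and then discarded (mimics the idea, not PipelineDB's API).
from collections import defaultdict
from datetime import datetime

hourly_counts = defaultdict(int)   # the "continuous view": hour bucket -> count

def ingest(event_time: datetime, value: int) -> None:
    bucket = event_time.replace(minute=0, second=0, microsecond=0)
    hourly_counts[bucket] += value  # incremental update; the raw event is not stored

ingest(datetime(2018, 10, 25, 9, 15), 1)
ingest(datetime(2018, 10, 25, 9, 42), 1)
ingest(datetime(2018, 10, 25, 10, 3), 1)
print(hourly_counts)  # two hourly buckets, regardless of how many events arrived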

Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless

Vincy Davis
01 Aug 2019
2 min read
Yesterday, the Data Transfer Project (DTP) announced on its website that Apple has officially joined the project as a contributor, along with other tech giants like Google, Facebook, Microsoft and Twitter.

Read More: Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project

The Data Transfer Project, launched in 2018, is an open-source, service-to-service data portability platform which allows individuals to move their data across the web whenever they want. The seamless transfer of data aims to give users more control of their data across the web. Its tools will make it possible for users to port their music playlists, contacts or documents from one social network to another without much effort.

Currently, the DTP has 18 contributors. Its partners and open source community have contributed more than 42,000 lines of code and changed more than 1,500 files in the project. Other alternative social networks like Deezer, Mastodon, and Solid have also joined the project. New cloud logging and monitoring framework features and new APIs from Google Photos and SmugMug have also been added.

The Data Transfer Project is still in the development stage, as its official site states: "We are continually making improvements that might cause things to break occasionally. So as you are trying things please use it with caution and expect some hiccups." Its GitHub page has seen regular updates since launch and currently has 2,480 stars, 209 forks and 187 watchers.

Many users are happy that Apple has also joined the project, as this means easier transfer of data for them.
https://twitter.com/backlon/status/1156259766781394944
https://twitter.com/humancell/status/1156549440133632000
https://twitter.com/BobertHepker/status/1156352450875592704

Some users suspect that such projects will encourage unethical sharing of user data.
https://twitter.com/zananeichan/status/1156416593913667585
https://twitter.com/sarahjeong/status/1156313114788241408

Visit the Data Transfer Project website for more details.

Related reading:
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
Softbank announces a second AI-focused Vision Fund worth $108 billion with Microsoft, Apple as major investors
Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ

Has the EU just ended the internet as we know it?

Richard Gall
13 Sep 2018
4 min read
Yesterday (12 September), the EU Parliament voted through the EU Copyright Directive. This move will, according to critics, put an end to the open internet as we know it. We reported on what the EU Copyright Directive means for the developer world earlier this week. Now, that world has been rocked significantly, with engineers, technologists and free speech advocates searching for solutions to what looks like a potentially devastating result.

What happened in the EU Copyright Directive vote?
Articles 11 and 13 were both crucial issues in this week's vote. They were the reason the directive was rejected back in July. The vote yesterday was on small amendments to these articles that, for the most part, keep their intent intact. Article 11 has been described as a link tax - it effectively hands publishers control over who can link to their content and how, while Article 13 has been criticised for enforcing 'copyright filters' on websites and platforms where users upload content. 438 MEPs voted in favor of the directive; 226 against it.

Why did MEPs vote in favor of the EU Copyright Directive?
The EU Parliament press release provides a good indication of the thinking behind the directive. It would seem that the intent is to remove some of the power from large tech platforms - like Google and Facebook - and return some power to media companies and content producers that have been struggling in the digital age. The press release states: "Many of Parliament's changes to the EU Commission's original proposal aim to make certain that artists, notably musicians, performers and script authors, as well as news publishers and journalists, are paid for their work when it is used by sharing platforms such as YouTube or Facebook, and news aggregators such as Google News."

Alongside this, there are a number of exemptions in the legislation that the EU Parliament argues will ensure that none of the consequences its critics have suggested will actually happen. For example:

- Small and micro platforms are excluded from the directive.
- Normal hyperlinks won't be impacted by Article 11: the press release states that 'merely sharing hyperlinks to articles, together with "individual words" to describe them, will be free of copyright constraints'.
- Wikipedia and open source platforms like GitHub will be exempt.

What happens next?
There will be a final vote on the directive in January 2019. However, even if this passes, the implementation of the legislation might vary at a national level: individual EU countries could choose to enact the directive in whichever way they choose.

Reaction to the vote
The EU Copyright Directive has faced intense criticism since it first appeared back in 2016 - but with the vote yesterday, organizations and individuals have voiced their concern at the result. Julia Reda, MEP and member of the Pirate Party in Germany, who has been a vociferous opponent of the directive in Parliament, called it "a severe blow to the free and open internet."

Similarly, the Electronic Frontier Foundation published a forthright post against the EU Parliament's decision: "We suffered a crushing setback today, but it doesn't change the mission. To fight, and fight, and fight, to keep the Internet open and free and fair, to preserve it as a place where we can organise to fight the other fights that matter, about inequality and antitrust, race and gender, speech and democratic legitimacy."

The EFF also put together a letter addressed to Antonio Tajani, the President of the European Parliament. It was signed by some of the best known figures in technology, including Tim Berners-Lee, Guido van Rossum, and Jimmy Wales. The letter ends: "We support the consideration of measures that would improve the ability for creators to receive fair remuneration for the use of their works online. But we cannot support Article 13, which would mandate Internet platforms to embed an automated infrastructure for monitoring and censorship deep into their networks. For the sake of the Internet's future, we urge you to vote for the deletion of this proposal."

The fight isn't over yet, but you can sense palpable fear in many quarters about what this means for the future of the internet as we know it.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform

Savia Lobo
15 May 2018
2 min read
Google recently announced the Google Compute Engine Plugin for Jenkins, which helps to provision, configure and scale Jenkins build environments on Google Cloud Platform (GCP).

Jenkins is one of the most popular tools for continuous integration (CI), a standard practice carried out by many software organizations. CI assists in automatically detecting changes committed to one's software repositories, running them through unit tests, integration tests and functional tests, and finally creating an artifact (JAR, Docker image, or binary). Jenkins helps one to define a build and test process, then run it continuously against the latest software changes. However, as one scales up their continuous integration practice, one may need to run builds across fleets of machines rather than on a single server.

With the Google Compute Engine Plugin, DevOps teams can intuitively manage instance templates and launch build instances that automatically register themselves with Jenkins. The plugin automatically deletes unused instances once work in the build system has slowed down, so that one only pays for the instances needed. One can also configure the Google Compute Engine Plugin to create build instances as preemptible VMs, which can save up to 80% on per-second pricing of builds, and attach accelerators like GPUs and Local SSDs to instances to run builds faster. Build instances can be configured as per one's choice, including the networking. For instance:

- Disable external IPs so that worker VMs are not publicly accessible
- Use Shared VPC networks for greater isolation in one's GCP projects
- Apply custom network tags for improved placement in firewall rules

One can also reduce the security risks present in CI by using the Compute Engine Plugin, as it uses the latest and most secure version of the Jenkins Java Network Launch Protocol (JNLP) remoting protocol. When using Jenkins on-premises, one can create an ephemeral build farm in Compute Engine while keeping the Jenkins master and other necessary build dependencies behind a firewall.

Read more about the Compute Engine Plugin in detail on Google's official blog.

Related reading:
How machine learning as a service is transforming cloud
Polaris GPS: Rubrik's new SaaS platform for data management applications
Google announces the largest overhaul of their Cloud Speech-to-Text
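
The plugin handles provisioning from instance templates inside Jenkins, so no code is required. Purely as a rough sketch of what a preemptible build agent amounts to at the Compute Engine API level, the snippet below creates one with Python's google-api-python-client; the project, zone, machine type, and image are placeholder values, not anything from the announcement.

# Rough sketch: creating a preemptible Compute Engine instance directly via the API.
# The Jenkins plugin automates this from instance templates; names here are hypothetical.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")          # uses Application Default Credentials

project, zone = "my-project", "us-central1-a"        # placeholder project and zone
config = {
    "name": "jenkins-agent-1",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "scheduling": {"preemptible": True},              # preemptible class: the up-to-80% saving
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {"sourceImage": "projects/debian-cloud/global/images/family/debian-11"},
    }],
    # no accessConfigs entry, so the worker gets no external IP
    "networkInterfaces": [{"network": "global/networks/default"}],
}

compute.instances().insert(project=project, zone=zone, body=config).execute()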

Toolbox - When Intellisense Doesn't See Your New Object from Blog Posts - SQLServerCentral

Anonymous
11 Dec 2020
2 min read
I was just working on a new SQL job, and part of creating the job was adding a few new tables to our DBA maintenance database to hold data for the job. I created my monitoring queries, and then created new tables to hold that data. One tip - use SELECT...INTO as an easy way to create these types of tables: write your query and then add a one-time INTO clause to create the needed object with all of the appropriate column names, etc.

SELECT DISTINCT SERVERPROPERTY('ServerName') as Instance_Name
, volume_mount_point as Mount_Point
, cast(available_bytes/1024.0/1024.0/1024.0 as decimal(10,2)) as Available_GB
, cast(total_bytes/1024.0/1024.0/1024.0 as decimal(10,2)) as Total_GB
, cast((total_bytes-available_bytes)/1024.0/1024.0/1024.0 as decimal(10,2)) as Used_GB
, cast(100.0*available_bytes/total_bytes as decimal(5,2)) as Percent_Free
, GETDATE() as Date_Stamp
INTO Volume_Disk_Space_Info
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf WITH (NOLOCK)
ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID)
ORDER BY volume_mount_point

I thought at this point that everything was set, until I tried to write my next statement... the dreaded red squiggle of doom! I tried to use an alias to see if IntelliSense would detect that - no luck. Some Google-Fu brought me to the answer on Stack Overflow: there is an IntelliSense cache that sometimes needs to be refreshed. The easiest way to refresh the cache is simply CTRL-SHIFT-R, but there is also a menu selection in SSMS to perform the refresh: Edit >> IntelliSense >> Refresh Local Cache. In my case, once I performed the CTRL-SHIFT-R, the red squiggles disappeared!

Hope this helps!

The post Toolbox - When Intellisense Doesn't See Your New Object appeared first on SQLServerCentral.

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset

Amrata Joshi
27 Aug 2019
3 min read
Two years ago, the team at the Facebook AI Research (FAIR) lab open-sourced fastText, a library used for building scalable solutions for text representation and classification. To make models work efficiently on datasets with a large number of categories, finding the best hyperparameters is crucial. However, searching for the best hyperparameters manually is difficult, as the effect of each parameter varies from one dataset to another. For this, Facebook has developed an autotune feature in fastText that automatically finds the best hyperparameters for your dataset. Yesterday, the company announced that it is open-sourcing the hyperparameter autotuning feature for the fastText library.

What are hyperparameters?
Hyperparameters are parameters whose values are fixed before the training process begins. They are critical components of an application, and they can be tuned in order to control how a machine learning algorithm behaves. It is therefore important to search for the best hyperparameters, as the performance of an algorithm can depend heavily on their selection.

The need for hyperparameter autotuning
It is difficult and time-consuming to search for the best hyperparameters manually, even for expert users. This new feature makes the task easier by automatically determining the best hyperparameters for building an efficient text classifier. To use autotuning, a researcher provides the training data, a validation set and a time constraint. The researcher can also constrain the size of the final model with the help of the compression techniques in fastText. Building a size-constrained text classifier is useful for deploying models on devices or in the cloud while maintaining a small memory footprint.

With hyperparameter autotuning, researchers can now easily build a memory-efficient classifier that can be used for various tasks, including language identification, sentiment analysis, tag prediction, spam detection, and topic classification. The team's strategy for exploring hyperparameters is inspired by existing tools such as Nevergrad, but has been tailored to fastText to exploit the specific structure of its models. The autotune feature explores hyperparameters by initially sampling in a large domain that shrinks around the best combinations over time.

It seems that this new feature could be a competitor to Amazon SageMaker Automatic Model Tuning. In Amazon's offering, however, the user needs to select the hyperparameters to be tuned, a range for each parameter to explore, and the total number of training jobs, whereas Facebook's hyperparameter autotuning selects the hyperparameters automatically.

To know more about this news, check out Facebook's official blog post.

Related reading:
Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules
Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans
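
For readers who want to try it, a minimal sketch with the fastText Python bindings looks like the following; the file names and the time and size budgets are example values, not anything prescribed by the announcement.

# Minimal sketch of fastText's hyperparameter autotuning (file names are hypothetical).
import fasttext

model = fasttext.train_supervised(
    input="train.txt",                  # training data in fastText's __label__ format
    autotuneValidationFile="valid.txt", # autotuning optimizes the score on this set
    autotuneDuration=600,               # time budget for the search, in seconds
    autotuneModelSize="2M",             # also quantize so the final model fits in ~2 MB
)
print(model.test("valid.txt"))          # (number of samples, precision@1, recall@1)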

MagicLeap acquires Computes Inc to enhance spatial computing

Sugandha Lahoti
08 Oct 2018
2 min read
Magic Leap has announced that it is acquiring Computes Inc to bring in advancements in the field of computing. Computes Inc had already been working in the field of decentralized mesh computing; with Magic Leap, it will now be bringing its platform to the world of spatial computing. "With over 27 billion connected devices in the world today (and growing), the Computes platform provides us with the necessary building blocks to make spatial computing available to everyone," said Gus Pinto, VP of Product and Engineering.

Computes Inc's decentralized processing unit (DPU) orchestrates sophisticated machine learning algorithms, massively parallel computations, and large datasets in a peer-to-peer (P2P) fashion. Its services are available across datacenters, cloud, edge networks, operating systems, mobile and IoT devices, and web browsers. With this new collaboration, Computes Inc will be working on developing a new set of computing services for developers, creators, enterprises, and end users to help them leverage the power of spatial computing on any platform.

Magic Leap will be using Computes Inc's mesh computing to power its compute-heavy Mixed Reality and Augmented Reality services by grouping systems to push resources to the devices that need them most. With Computes Inc on board, Magic Leap may also expand its augmented reality technology to other devices.

Chris Matthieu, one of the founders of Computes Inc, spoke about the collaboration, stating, "As you know, Jade and I started Computes, Inc. based on the principle of enabling the next generation of computing, and we believe Magic Leap is the perfect home for us to achieve this vision."

Read more about the announcement on Magic Leap's website.

Related reading:
Magic Leap teams with Andy Serkis' Imaginarium Studios to enhance Augmented Reality
Understanding the hype behind Magic Leap's New Augmented Reality Headsets
Magic Leap One, the first mixed reality headsets by Magic Leap, is now available at $2295

What can we expect from TensorFlow 2.0?

Savia Lobo
17 Sep 2018
3 min read
Last month, Google announced that the TensorFlow community plans to release a preview of TensorFlow 2.0 later this year; however, the date for the preview release has not been disclosed yet. The 2.0 version will include major highlights such as improved eager execution, improved compatibility, support for more platforms and languages, and much more.

Key highlights in TensorFlow 2.0
Eager execution will be an important feature of TensorFlow 2.0. It aligns users' expectations about the programming model with TensorFlow practice, which should make TensorFlow easier to learn and apply. This version also includes support for more platforms and languages, and will provide improved compatibility and parity between these components via standardization on exchange formats and alignment of APIs. The community plans to remove deprecated APIs and reduce the amount of duplication that has caused confusion for users.

Other improvements in TensorFlow 2.0

Increased compatibility and continuity
TensorFlow 2.0 is an opportunity to correct mistakes and to make improvements which are otherwise restricted under semantic versioning. To ease the transition for users, the community plans to create a conversion tool which updates Python code to use TensorFlow 2.0 compatible APIs, and warns in cases where conversion is not possible automatically. A similar tool helped tremendously during the transition to 1.0. As not all changes can be made fully automatically, the community plans to deprecate some APIs, some of which have no direct equivalent. For such cases, they will offer a compatibility module (tensorflow.compat.v1) which contains the full TensorFlow 1.x API and will be maintained through the lifetime of TensorFlow 2.x.

On-disk compatibility
The community will not be making any breaking changes to SavedModels or stored GraphDefs; the plan is to include all current kernels in 2.0. However, the changes in 2.0 will mean that variable names in raw checkpoints might have to be converted before being compatible with new models.

Improvements to tf.contrib
As part of releasing TensorFlow 2.0, the community will stop distributing tf.contrib. For each of the contrib modules, they plan to either integrate the project into TensorFlow, move it to a separate repository, or remove it entirely. This means that all of tf.contrib will be deprecated, and the community will stop adding new tf.contrib projects.

The following YouTube video by Aurélien Géron explains the changes in TensorFlow 2.0 in detail.
https://www.youtube.com/watch?v=WTNH0tcscqo

Related reading:
Understanding the TensorFlow data model [Tutorial]
TensorFlow announces TensorFlow Data Validation (TFDV) to automate and scale data analysis, validation, and monitoring
Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi [Tutorial]
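
As a small illustration (not code from the announcement) of what the headline change means in practice: with eager execution, operations run immediately instead of being added to a graph that is executed later in a session, while 1.x-style code remains reachable through the compatibility module mentioned above.

# Sketch of TF 2.x behavior: eager execution runs ops immediately, no Session needed.
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
print(tf.matmul(x, w))     # tf.Tensor([[11.]], shape=(1, 1), dtype=float32), computed eagerly

# Legacy 1.x names stay available under the compatibility module described above.
tf1 = tf.compat.v1         # e.g. tf1.placeholder, tf1.Session for old graph-style code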

OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0

Melisha Dsouza
07 Sep 2018
4 min read
OpenFaaS announced on 5 September 2018 that it has released support for stateless microservices in OpenFaaS 0.9.0, asserting that managing FaaS functions and microservices will now be easier. A stateless microservice can be deployed as if it were a FaaS function and managed by a FaaS framework or platform such as OpenFaaS. Hence, no special routes, flags or filters are needed in the OpenFaaS CLI, Gateway API or UI.

Source: OpenFaaS

The upgrade came as a follow-up to two requests from the microservices community. One of the users at Wireline.io raised a feature request to enhance the HTTP route functionality of functions and to write functions that run both on AWS Lambda and on OpenFaaS without any additional changes. Then came the request from the CEO of GitLab, Sid Sijbrandij, who wanted to learn more about serverless and how it could benefit GitLab. He was unsure whether OpenFaaS could be used to manage both FaaS functions and the microservices his team was more familiar with (e.g. Sinatra apps), and he wanted to know more about scaling to zero when idle.

To address these requests, the OpenFaaS blog walks through deploying a Ruby and Sinatra guestbook backed by MySQL, deployed to OpenFaaS with Kubernetes. This is how the task can be done.

Users start off by creating the Sinatra stateless microservice. They can create a hello-world service by supplying their own Dockerfile and executing the following commands (replace alexellis2 with your Docker Hub account or another Docker registry):

$ mkdir -p sinatra-for-OpenFaaS/ \
  && cd sinatra-for-OpenFaaS/
$ faas-cli new --prefix=alexellis2 --lang dockerfile frank-says

This is followed by creating a Gemfile and the main file, ./frank-says/main.rb:

require 'sinatra'

set :port, 8080
set :bind, '0.0.0.0'

open('/tmp/.lock', 'w') { |f| f.puts "Service started" }

get '/' do
  'Frank has entered the building'
end

get '/logout' do
  'Frank has left the building'
end

Things to note on OpenFaaS workloads while doing this:

- Bind to TCP port 8080
- Write a file /tmp/.lock when ready to receive traffic

The Dockerfile adds a non-root user, adds the Ruby source and Gemfile, then installs the Sinatra gem. Finally, it adds a healthcheck on a 5-second interval and sets the start-up command.

Users can now deploy the example using the OpenFaaS CLI. Log in with your account details:

$ docker login

Run the up command, which is an alias for build, push and deploy:

$ faas-cli up --yaml frank-says.yml

Deploying: frank-says.
Deployed. 200 OK. URL: http://127.0.0.1:8080/function/frank-says

To deploy the Sinatra guestbook with MySQL, execute:

$ git clone https://github.com/OpenFaaS-incubator/OpenFaaS-sinatra-guestbook \
  && cd OpenFaaS-sinatra-guestbook

Configure the MySQL database details in ./sql.yml:

$ cp sql.example.yml sql.yml

Finally, deploy the guestbook:

$ faas-cli up

http://127.0.0.1:8080/function/guestbook

The URL given by the command above should be used to access the microservice. Sign the guestbook using the UI, and reset the MySQL table at any time by posting to /function/guestbook/reset.

Source: OpenFaaS

The guestbook code stores its state in a MySQL table. A key property of FaaS functions and stateless microservices is that they can be restarted at any time without losing data.

For a detailed implementation of the guestbook example, head over to the OpenFaaS blog post.

How to enable zero-scale?
To enable scaling to zero, simply follow the documentation. Users add a label to their stack.yml file to tell OpenFaaS that the function is eligible for zero-scaling:

labels:
  com.openfaas.scale.zero: true

Finally, redeploy the guestbook with faas-cli up. The faas-idler will now scale the function to zero replicas as soon as it is detected as idle. The default idle period is set at 5 minutes, and it can be configured at deployment time.

OpenFaaS has thus demonstrated a stateless microservice written in Ruby that scales to zero when idle and back again in time to serve traffic, managed in exactly the same way as existing OpenFaaS functions. The new support for stateless microservices makes it easier for users to manage their microservices. Head over to the OpenFaaS blog for a detailed explanation of deploying a simple hello-world Sinatra service and to gain more insights about the upgrade.

Related reading:
6 Ways to blow up your Microservices!
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js

The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI

Bhagyashree R
16 Oct 2018
3 min read
After the release of Ember 3.4 earlier this month, the Ember project has released version 3.5 of the three core sub-projects: Ember.js, Ember Data, and Ember CLI. This release boasts up to 32% performance improvement in Ember CLI builds and a new Ember Data that empowers addon developers. This version also kicks off the 3.6 beta cycle for all three sub-projects. Additionally, Ember 3.4 is now promoted to LTS, which stands for Long Term Support: it will continue to receive security updates for 9 release cycles and bug fixes for 6 cycles. Let's now explore what updates have been added in this release.

Updates in Ember.js 3.5
This version is an incremental and backwards compatible release with two small bug fixes that pave the way for new features in future releases:

- In some cases an Alias wouldn't tear down properly, leaving an unbalanced watch count in meta. This is now fixed.
- Naming routes "array" and "object" is now allowed.

Updates in Ember Data 3.5
This release hits two milestones: the very first LTS release of ember-data, and the RecordData interfaces.

RecordData
RecordData gives addon developers much-needed API access with more confidence and stability. This new addition will make it easier for developers to implement many commonly requested features, such as improved dirty-tracking, fragments, and alternative Models, in addons. With this new feature added, the Ember developers are considering deprecating and removing use of the private but intimate InternalModel API. Also, be warned that this change might cause some regressions in your applications.

RecordData use with ModelFragments
Most of the community addons work with RecordData versions of ember-data, but ember-data-model-fragments does not currently work with it. If you are using this addon, it is advisable to stay on ember-data 3.4 LTS until the community has released a version compatible with RecordData.

Updates in Ember CLI 3.5
Three new features have been added in Ember CLI 3.5:

- Upgraded to Broccoli v2.0.0: previously, tools in the Ember ecosystem relied on a fork of Broccoli, but from this release, Ember CLI uses Broccoli 2.0 directly.
- Build speed improvements of up to 32%: Broccoli 2 allows Ember CLI to use the default system temp directory instead of a ./tmp directory local to a project folder. Depending on computer hardware, users may see up to 32% improvements in build time.
- Migrated to ember-qunit: as all of the main functionality lives in ember-qunit while ember-cli-qunit is just a very thin shim over it, Ember CLI now uses ember-qunit directly, and ember-cli-qunit will ultimately become deprecated.

To read the full list of updates, check out the official announcement by the Ember community.

Related reading:
The Ember project announces version 3.4 of Ember.js, Ember Data, and Ember CLI
Ember project releases v3.2.0 of Ember.js, Ember Data, and Ember CLI
Getting started with Ember.js – Part 1

Google’s new Chrome extension ‘Password Checkup’ checks if your username or password has been exposed in a third-party breach

Melisha Dsouza
06 Feb 2019
2 min read
Google released a new Chrome extension on Tuesday, called Password Checkup. The extension informs users if the username and password they are currently using were stolen in any data breaches, and then prompts them to reset their password.

If a user's Google account credentials have been exposed in a third-party data breach, the company automatically resets their password; the new Chrome extension extends the same level of protection to all services on the web. On installing, Password Checkup appears in the browser bar as a green shield. The extension then checks login details against a database of around four billion usernames and passwords. If a match is found, a dialogue box prompting users to "Change your password" appears and the icon turns bright red.

Source: Google

Password Checkup was designed by Google along with cryptography experts at Stanford University, keeping in mind that Google should not be able to capture a user's credentials, to prevent a "wider exposure" of the situation. Google's blog states, "We also designed Password Checkup to prevent an attacker from abusing Password Checkup to reveal unsafe usernames and passwords." Password Checkup uses multiple rounds of hashing, k-anonymity, private information retrieval, and a technique called blinding to keep the user's credentials private. You can check out Google's blog for technical details on the extension.

Related reading:
Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications
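
Google's protocol combines all four of those techniques; as a simplified illustration of just the k-anonymity idea (not Google's actual protocol, hashing scheme, or endpoints), a client can reveal only a short hash prefix to the server and perform the exact comparison locally:

# Simplified k-anonymity check: only a short hash prefix ever leaves the "client",
# so the server learns neither the username nor the password. Not Google's protocol.
import hashlib

# Stand-in for the server-side breach database of hashed username:password pairs.
BREACH_DB = {hashlib.sha256(c.encode()).hexdigest() for c in ["alice:hunter2", "bob:123456"]}

def server_lookup(prefix: str) -> set:
    # The server returns every breached hash sharing the 4-hex-char prefix (the anonymity set).
    return {h for h in BREACH_DB if h.startswith(prefix)}

def is_breached(username: str, password: str) -> bool:
    digest = hashlib.sha256(f"{username}:{password}".encode()).hexdigest()
    candidates = server_lookup(digest[:4])   # coarse prefix query only
    return digest in candidates              # exact match checked client-side

print(is_breached("alice", "hunter2"))                        # True
print(is_breached("alice", "correct horse battery staple"))   # False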