
How-To Tutorials - AI Tools

89 Articles

Introduction to Gen AI Studio

Anubhav Singh
07 Sep 2023
6 min read
In this article, we'll explore the basics of Generative AI Studio and how to run a language model within this suite, with a practical example. Generative AI Studio is the all-encompassing offering of generative AI-based services on Google Cloud. It includes models of different types, allowing users to generate text, image, or audio content. On Generative AI Studio, or Gen AI Studio, users can rapidly prototype and test different types of prompts associated with the different types of models to figure out which parameters and settings work best for their use cases. Then, they can easily shift the tested configurations into the code bases of their solutions.

Model Garden, on the other hand, provides a collection of foundation and customized generative AI models which can be used directly as models in code or as APIs. The foundation models have been trained by Google themselves, whereas the fine-tuned/task-specific models include models that have been developed and trained by third parties.

Gen AI Studio

Packaged within Vertex AI, the Generative AI Studio on Google Cloud Platform provides low-code solutions for developing and testing invocations of Google's AI models that can then be used within customers' solutions. As of August 2023, the following solutions are part of the Generative AI Studio:

Language: Models used to generate text-based responses. These models may answer questions, perform classification, recognize sentiment, or do anything else that involves text understanding.
Vision: Models used to generate images and visual content in different drawing styles.
Speech: Models that perform either speech-to-text conversion or text-to-speech conversion.

Let's explore each of these in detail. The language models in Gen AI Studio are based on the PaLM 2 for Text models and currently come in the form of either "text-bison" or "chat-bison". The first type is the base model, which allows performing any kind of task related to text understanding and generation. "Chat-bison" models, on the other hand, focus on providing a conversational interface for interacting with the model. Thus, they are more suitable for tasks that require a conversation between the user and the model. There is another form of the PaLM 2 models, available as "code-bison", which powers the Codey product suite and deals with programming languages instead of human languages.

Let's take a look at how we can use a language model in Gen AI Studio. Follow the steps below:

1. First, head over to https://console.cloud.google.com/vertex-ai/generative in your browser with a billing-enabled Google Cloud account. You will see the Generative AI Studio dashboard.
2. Next, click "Open" in the card titled "Language".
3. Then, click on "Text Prompt" to open the prompt builder interface. The interface should look similar to the image below; however, being an actively developed product, it may change in several ways in the future.
4. Now, let us write a prompt. For our example, we'll instruct the model to fact-check whatever is passed to it. Here's a sample prompt:

You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User:

5. Now, write the user's part as well and hit the Submit button. The last line of the prompt would now be: User: I am eating an apple.
6. Observe the response. Then, change the user's part to "I am an apple" and "I am a human". Observe the response in each case. The following output table is expected:

Once we're satisfied with the model responses based on our prompt, we can shift the model invocation to code. In our example, we'll do it in Google Colaboratory. Follow the steps below:

1. Open Google Colaboratory by visiting https://colab.research.google.com/
2. In the first cell, we'll install the required libraries for using Gen AI Studio models:

%%capture
!pip install "shapely<2.0.0"
!pip install google-cloud-aiplatform --upgrade

3. Next, we'll authenticate the Colab notebook so that it can access the Google Cloud resources available to the currently logged-in user:

from google.colab import auth as google_auth
google_auth.authenticate_user()

4. Then, we import the required libraries:

import vertexai
from vertexai.language_models import TextGenerationModel

5. Now, we instantiate the Vertex AI client to work with the project. Take note to replace PROJECT_ID with your own project's ID on Google Cloud:

vertexai.init(project=PROJECT_ID, location="us-central1")

6. Let us now set the configuration that the model will use while answering our prompts and initialize the model client:

parameters = {
    "candidate_count": 1,
    "max_output_tokens": 256,
    "temperature": 0,
    "top_p": 0.8,
    "top_k": 40
}
model = TextGenerationModel.from_pretrained("text-bison@001")

7. Now, we can call the model and observe the response by printing it:

response = model.predict(
    """You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User: I am a human""",
    **parameters
)
print(f"Response from Model: {response.text}")

You can similarly work with the other models available in Gen AI Studio. In this notebook, we've provided one example each of Language, Vision, and Speech model usage: GenAIStudio&ModelGarden.ipynb

Author Bio
Anubhav Singh, co-founder of Dynopii and a Google Developer Expert in Google Cloud, has been a developer since the pre-Bootstrap era and has extensive experience as a freelancer and AI startup founder. He authored "Hands-on Python Deep Learning for Web" and "Mobile Deep Learning with TensorFlow Lite, ML Kit, and Flutter." A Google Developer Expert in GCP, he co-organizes the TFUG Kolkata community and formerly led the team at GDG Cloud Kolkata. Anubhav is often found discussing system architecture, machine learning, and web technologies.
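The tutorial above demonstrates the text-bison model; as a complementary illustration (not part of the original tutorial), here is a minimal sketch of invoking the chat-bison model it mentions through the same Vertex AI SDK. The ChatModel class and the "chat-bison@001" name come from the Vertex AI Python SDK of that period, but treat the exact parameter choices as assumptions and verify them against the current documentation.

import vertexai
from vertexai.language_models import ChatModel

PROJECT_ID = "your-project-id"  # placeholder, as in the steps above
vertexai.init(project=PROJECT_ID, location="us-central1")

chat_model = ChatModel.from_pretrained("chat-bison@001")
# The context plays the same role as the instruction block in the text prompt above.
chat = chat_model.start_chat(
    context="You're a Fact Checker Bot. Reply with 'This is a fact', "
            "'This is not a fact', or 'This is a myth' for every user message.",
)
response = chat.send_message("I am a human", temperature=0, max_output_tokens=256)
print(f"Response from Model: {response.text}")

Because the chat session keeps the conversation history, follow-up messages such as "And what about 'I am an apple'?" can refer back to earlier turns without restating the instructions.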


Sentiment Analysis with Generative AI

Sangita Mahala
09 Nov 2023
8 min read
Introduction
The process of detecting and extracting emotion from text is referred to as sentiment analysis. It is a powerful tool that can help you understand the views of consumers, monitor brand reputation, and gauge customer satisfaction. Generative AI models like GPT-3, PaLM, and Bard can change the way we think about sentiment analysis. These models can be trained to understand the nuances of human language and to detect sentiment in complicated or subtle text.

Benefits of using generative AI for sentiment analysis
There are several benefits to using generative AI for sentiment analysis, including:
Accuracy: Generative AI models can achieve very high accuracy in sentiment analysis because of their ability to learn the intricate patterns and relationships between words and phrases that carry different meanings.
Scalability: Generative AI models can be scaled to analyze large volumes of text quickly and efficiently. This is particularly important for businesses and organizations that need to process large quantities of customer feedback or social media data.
Flexibility: Generative AI models can be adapted to the specific needs of different companies and organizations. A model may be trained to determine the sentiment of customer reviews, social media posts, or news articles.

How to use generative AI for sentiment analysis
There are two main ways to use generative AI for sentiment analysis:
Prompt engineering: Prompt engineering is the process of designing prompts that guide generative AI models toward the desired outputs. For example, the model might be asked to "classify the following sentence as positive, negative, or neutral: I'm in love with this new product!"
Fine-tuning: Fine-tuning refers to training a generative AI model on a specific dataset of text and labels, so that the model learns the patterns and relationships associated with different sentiments in that dataset.

Hands-on examples
In the following examples, we use NLTK's VADER sentiment analyzer to classify example sentences; afterwards, we discuss how the same task can be posed to the PaLM API through a prompt.

Example - 1
Input:

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Download the VADER lexicon for sentiment analysis (run this once)
nltk.download('vader_lexicon')

def analyze_sentiment(sentence):
    # Initialize the VADER sentiment intensity analyzer
    analyzer = SentimentIntensityAnalyzer()
    # Analyze the sentiment of the sentence
    sentiment_scores = analyzer.polarity_scores(sentence)
    # Determine the sentiment based on the compound score
    if sentiment_scores['compound'] >= 0.05:
        return 'positive'
    elif sentiment_scores['compound'] <= -0.05:
        return 'negative'
    else:
        return 'neutral'

# Example usage with a positive sentence
positive_sentence = "I am thrilled with the results! The team did an amazing job!"
sentiment = analyze_sentiment(positive_sentence)
print(f"Sentiment: {sentiment}")

Output:
Sentiment: positive

To analyze the emotion in a particular sentence, we created a function that classifies it based on its compound sentiment score and labels it as positive, negative, or neutral.
For example, a positive sentence is analyzed and the result shows a "positive" sentiment.

Example - 2
Input:

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Download the VADER lexicon for sentiment analysis (run this once)
nltk.download('vader_lexicon')

def analyze_sentiment(sentence):
    # Initialize the VADER sentiment intensity analyzer
    analyzer = SentimentIntensityAnalyzer()
    # Analyze the sentiment of the sentence
    sentiment_scores = analyzer.polarity_scores(sentence)
    # Determine the sentiment based on the compound score
    if sentiment_scores['compound'] >= 0.05:
        return 'positive'
    elif sentiment_scores['compound'] <= -0.05:
        return 'negative'
    else:
        return 'neutral'

# Example usage with a negative sentence
negative_sentence = "I am very disappointed with the service. The product didn't meet my expectations."
sentiment = analyze_sentiment(negative_sentence)
print(f"Sentiment: {sentiment}")

Output:
Sentiment: negative

We have set up a function to evaluate the sentiment of a sentence and classify it according to its sentiment score as positive, negative, or neutral. To illustrate this, a negative sentence is analyzed and the output indicates a "negative" sentiment.

Example - 3
Input:

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Download the VADER lexicon for sentiment analysis (run this once)
nltk.download('vader_lexicon')

def analyze_sentiment(sentence):
    # Initialize the VADER sentiment intensity analyzer
    analyzer = SentimentIntensityAnalyzer()
    # Analyze the sentiment of the sentence
    sentiment_scores = analyzer.polarity_scores(sentence)
    # Determine the sentiment based on the compound score
    if sentiment_scores['compound'] >= 0.05:
        return 'positive'
    elif sentiment_scores['compound'] <= -0.05:
        return 'negative'
    else:
        return 'neutral'

# Example usage
sentence = "This is a neutral sentence without any strong sentiment."
sentiment = analyze_sentiment(sentence)
print(f"Sentiment: {sentiment}")

Output:
Sentiment: neutral

For any text item, whether it is a customer review, a social media post, or a news report, the PaLM API can also be used for sentiment analysis. To do this, write a prompt that tells the model what you want it to do, and call the API asking it to classify the given sentence as positive, negative, or neutral. The model then generates a prediction that you can print to the console or use in your application, as sketched below.
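The article describes this prompt-based approach without showing code for it. Purely as an illustrative sketch (the google.generativeai package, the "models/text-bison-001" model name, and the API key handling are assumptions based on the public PaLM API, not part of the original article), a prompt-driven classifier might look like this:

import google.generativeai as palm

palm.configure(api_key="YOUR_PALM_API_KEY")  # hypothetical placeholder; supply your own key

def classify_sentiment(sentence: str) -> str:
    # Prompt engineering: instruct the model to answer with a single label.
    prompt = (
        "Classify the following sentence as positive, negative, or neutral. "
        "Answer with one word only.\n"
        f"Sentence: {sentence}\nSentiment:"
    )
    completion = palm.generate_text(
        model="models/text-bison-001",
        prompt=prompt,
        temperature=0,          # deterministic output suits classification
        max_output_tokens=5,
    )
    return (completion.result or "").strip().lower()

print(classify_sentiment("I'm in love with this new product!"))  # expected: positive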
Applications of sentiment analysis with generative AI
Sentiment analysis with generative AI can be used in a wide variety of applications, including:
Customer feedback analysis: Generative AI models can be used to analyze customer reviews and feedback to identify trends and areas for improvement.
Social media monitoring: Generative AI models can be used to monitor social media platforms for brand sentiment and public opinion.
Market research: Generative AI models can be used to analyze market research data to better understand customer preferences and find new opportunities.
Product development: Generative AI models can be used to analyze customer feedback and product reviews to identify new features and improvements.
Risk assessment: Generative AI models can be used to analyze financial and other data to assess risk.

Challenges of using generative AI for sentiment analysis
While generative AI has the potential to revolutionize sentiment analysis, there are also some challenges that need to be addressed:
Data requirements: Generative AI models require large amounts of training data to be effective. This can be a challenge for businesses and organizations that do not have access to large datasets.
Model bias: Generative AI models can be biased due to the biases inherent in the data they are trained on. This needs to be taken into account, and steps should be taken to mitigate it.
Interpretation difficulties: The predictions of generative AI models can be hard to interpret. This makes it difficult to understand why a model made a particular prediction and to trust its results.

Conclusion
The potential for generative AI to transform sentiment analysis is enormous. Generative AI models can achieve very high accuracy, scale to analyze large volumes of text, and be customized to meet the specific needs of different companies and organizations. With prompt engineering and fine-tuning, generative AI models can be used to analyze sentiment across a broad range of text data.
Sentiment analysis with generative AI is a powerful new tool for understanding and analyzing human language in new ways. As generative AI models keep improving, we can expect this technology to be applied to an ever wider variety of applications and to have an important impact on how we live and work.

Author Bio
Sangita Mahala is a passionate IT professional with an outstanding track record, holding an impressive array of certifications, including 12x Microsoft, 11x GCP, 2x Oracle, and LinkedIn Marketing Insider Certified. She is a Google Crowdsource Influencer and an IBM Champion Learner Gold. She also has extensive experience as a technical content writer and is an accomplished book blogger. She is committed to staying current with emerging trends and technologies in the IT sector.


MetaGPT: Cybersecurity's Impact on Investment Choices

James Bryant, Alok Mukherjee
13 Dec 2023
9 min read
This article is an excerpt from the book The Future of Finance with ChatGPT and Power BI, by James Bryant and Alok Mukherjee. Enhance decision-making, transform your market approach, and find investment opportunities by exploring AI, finance, and data visualization with ChatGPT's analytics and Power BI's visuals.

Introduction
The MetaGPT model is a highly advanced and customizable model designed to address specific research and analysis needs within various domains. In this particular context, it is geared towards identifying investment opportunities within the US market that are influenced by cybersecurity regulatory changes or cyber breaches.

Roles and responsibilities
The model has been configured to perform various specialized roles, including these:
Cybersecurity regulatory research: Understanding changes in cybersecurity laws and regulations and their impact on the market
Cyber breach analysis: Investigating cyber breaches, understanding their nature, and identifying potential investment risks or opportunities
Investment analysis: Evaluating investment opportunities based on insights derived from cybersecurity changes
Trading decisions: Making informed buy or sell decisions on financial products
Portfolio management: Overseeing and aligning the investment portfolio based on cybersecurity dynamics

Here's how it works
Research phase: The model initiates research on the given topics, either cybersecurity regulations or breaches, depending on the role. It breaks down the topic into searchable queries, collects relevant data, ranks URLs based on credibility, and summarizes the gathered information.
Analysis phase: Investment analysts then evaluate the summarized information to identify trends, insights, and potential investment opportunities or risks. They correlate cybersecurity data with market behavior, investment potential, and risk factors.
Trading phase: Based on the analysis, investment traders execute appropriate trading decisions, buying or selling assets that are influenced by the cybersecurity landscape.
Management phase: The portfolio manager integrates all the insights to make overarching decisions about asset allocation, risk management, and alignment of the investment portfolio.

The following are its purposes and benefits:
Timely insights: By automating the research and analysis process, the model provides quick insights into a dynamic field such as cybersecurity, where changes can have immediate market impacts
Data-driven decisions: The model ensures that investment decisions are grounded in comprehensive research and objective analysis, minimizing bias
Customization: The model can be tailored to focus on specific aspects of cybersecurity, such as regulatory changes or particular types of breaches, allowing for targeted investment strategies
Collaboration: By defining different roles, the model simulates a collaborative approach, where various experts contribute their specialized knowledge to achieve a common investment goal

In conclusion, the MetaGPT model, with its diverse roles and sophisticated functions, serves as a powerful tool for investors looking to leverage the ever-changing landscape of cybersecurity.
By integrating research, analysis, trading, and portfolio management, it provides a comprehensive, data-driven approach to identifying and capitalizing on investment opportunities arising from the complex interplay of cybersecurity and finance. It not only streamlines the investment process but also enhances the accuracy and relevance of investment decisions in a rapidly evolving field.

Source: GitHub (MIT License): https://github.com/geekan/MetaGPT
Source: MetaGPT: Meta Programming for Multi-Agent Collaborative Framework (arxiv.org): https://arxiv.org/abs/2308.00352
By Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu

The following is a Python code snippet.

1. Begin with the installations:

npm --version
sudo npm install -g @mermaid-js/mermaid-cli
git clone https://github.com/geekan/metagpt
cd metagpt
python setup.py install

2. Run the following Python code:

# Configuration: OpenAI API Key
# Open the config/key.yaml file and insert your OpenAI API key in place of the placeholder.
# cp config/config.yaml config/key.yaml
# save and close file

# Import Necessary Libraries
import asyncio
import json
from typing import Callable
from pydantic import parse_obj_as

# Import MetaGPT Specific Modules
from metagpt.actions import Action
from metagpt.config import CONFIG
from metagpt.logs import logger
from metagpt.tools.search_engine import SearchEngine
from metagpt.tools.web_browser_engine import WebBrowserEngine, WebBrowserEngineType
from metagpt.utils.text import generate_prompt_chunk, reduce_message_length

# Define Roles
# NOTE: Replace these role definitions as per your project's needs.
RESEARCHER_ROLES = {
    'cybersecurity_regulatory_researcher': "Cybersecurity Regulatory Researcher",
    'cyber_breach_researcher': "Cyber Breach Researcher",
    'investment_analyst': "Investment Analyst",
    'investment_trader': "Investment Trader",
    'portfolio_manager': "Portfolio Manager"
}

# Define Prompts
# NOTE: Customize these prompts to suit your project's specific requirements.
LANG_PROMPT = "Please respond in {language}."

RESEARCH_BASE_SYSTEM = """You are a {role}. Your primary goal is to understand and analyze \
changes in cybersecurity regulations or breaches, identify investment opportunities, and make informed \
decisions on financial products, aligning with the current cybersecurity landscape."""

RESEARCH_TOPIC_SYSTEM = "You are a {role}, and your research topic is \"{topic}\"."

SEARCH_TOPIC_PROMPT = """Please provide up to 2 necessary keywords related to your \
research topic on cybersecurity regulations or breaches that require Google search. \
Your response must be in JSON format, for example: ["cybersecurity regulations", "cyber breach analysis"]."""

SUMMARIZE_SEARCH_PROMPT = """### Requirements
1. The keywords related to your research topic and the search results are shown in the "Reference Information" section.
2. Provide up to {decomposition_nums} queries related to your research topic based on the search results.
3. Please respond in JSON format as follows: ["query1", "query2", "query3", ...].

### Reference Information
{search}
"""

DECOMPOSITION_PROMPT = """You are a {role}, and before delving into a research topic, you break it down into several \
sub-questions. These sub-questions can be researched through online searches to gather objective opinions about the given \
topic.
---
The topic is: {topic}
---
Now, please break down the provided research topic into {decomposition_nums} search questions. You should respond with \
an array of strings in JSON format like ["question1", "question2", ...].
"""

COLLECT_AND_RANKURLS_PROMPT = """### Reference Information
1. Research Topic: "{topic}"
2. Query: "{query}"
3. The online search results: {results}
---
Please remove irrelevant search results that are not related to the query or research topic. Then, sort the remaining search results \
based on link credibility. If two results have equal credibility, prioritize them based on relevance. Provide the ranked \
results' indices in JSON format, like [0, 1, 3, 4, ...], without including other words.
"""

WEB_BROWSE_AND_SUMMARIZE_PROMPT = '''### Requirements
1. Utilize the text in the "Reference Information" section to respond to the question "{query}".
2. If the question cannot be directly answered using the text, but the text is related to the research topic, please provide \
a comprehensive summary of the text.
3. If the text is entirely unrelated to the research topic, please reply with a simple text "Not relevant."
4. Include all relevant factual information, numbers, statistics, etc., if available.

### Reference Information
{content}
'''

CONDUCT_RESEARCH_PROMPT = '''### Reference Information
{content}

### Requirements
Please provide a detailed research report on the topic: "{topic}", focusing on investment opportunities arising \
from changes in cybersecurity regulations or breaches. The report must:
- Identify and analyze investment opportunities in the US market.
- Detail how and when to invest, the structure for the investment, and the implementation and exit strategies.
- Adhere to APA style guidelines and include a minimum word count of 2,000.
- Include all source URLs in APA format at the end of the report.
'''

# Roles
RESEARCHER_ROLES = {
    'cybersecurity_regulatory_researcher': "Cybersecurity Regulatory Researcher",
    'cyber_breach_researcher': "Cyber Breach Researcher",
    'investment_analyst': "Investment Analyst",
    'investment_trader': "Investment Trader",
    'portfolio_manager': "Portfolio Manager"
}

# The rest of the classes and functions remain unchanged

Important notes
- Execute the installation and setup commands in your terminal before running the Python script
- Don't forget to replace placeholder texts in config files and the Python script with actual data or API keys
- Ensure that MetaGPT is properly installed and configured on your machine
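As an illustrative aside (not from the book excerpt), the role and prompt definitions above are plain Python format strings, so a quick way to sanity-check them before wiring them into MetaGPT's actions is simply to render them for one role and an example topic. The topic string below is made up for demonstration.

# Illustrative only: render the templates defined above for one role and topic.
role = RESEARCHER_ROLES['cybersecurity_regulatory_researcher']
topic = "2023 SEC cybersecurity disclosure rules"

system_prompt = RESEARCH_BASE_SYSTEM.format(role=role)
topic_prompt = RESEARCH_TOPIC_SYSTEM.format(role=role, topic=topic)
decomposition_prompt = DECOMPOSITION_PROMPT.format(role=role, topic=topic, decomposition_nums=3)

print(system_prompt)
print(topic_prompt)
print(decomposition_prompt)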
In this high-stakes exploration, we dissect the exhilarating yet precarious world of LLM-integrated applications. We delve into how they're transforming finance while posing emergent ethical dilemmas and security risks that simply cannot be ignored. Be prepared to journey through real-world case studies that highlight the good, the bad, and the downright ugly of LLM applications in finance, from market-beating hedge funds to costly security breaches and ethical pitfalls.

Conclusion
"In an era shaped by cyber landscapes, MetaGPT emerges as the guiding light for astute investors. Seamlessly blending cybersecurity insights with finance, it pioneers a data-driven approach, unveiling opportunities and risks often concealed within regulatory shifts and breaches. This model isn't just a tool; it's the compass navigating the ever-changing intersection of cybersecurity and finance, empowering investors to thrive in an intricate, high-stakes market."

Author Bio
James Bryant, a finance and technology expert, excels at identifying untapped opportunities and leveraging cutting-edge tools to optimize financial processes. With expertise in finance automation, risk management, investments, trading, and banking, he's known for staying ahead of trends and driving innovation in the financial industry. James has built corporate treasuries at companies such as Salesforce and transformed companies like Stanford Health Care through digital innovation. He is passionate about sharing his knowledge and empowering others to excel in finance. Outside of work, James enjoys skiing with his family in Lake Tahoe, running half marathons, and exploring new destinations and culinary experiences with his wife and daughter.

Aloke Mukherjee is a seasoned technologist with over a decade of experience in business architecture, digital transformation, and solutions architecture. He excels at applying data-driven solutions to real-world problems and has proficiency in data analytics and planning. Aloke worked at EMC Corp and Genentech and currently spearheads the digital transformation of Finance Business Intelligence at Stanford Health Care. In addition to his work, Aloke is a Certified Personal Trainer and is passionate about helping his clients stay fit. Aloke also has a passion for wine and exploring new vineyards.


Making the best out of Hugging Face Hub using LangChain

Ms. Valentina Alto
17 Jun 2023
6 min read
Since the launch of ChatGPT in November 2022, everyone has been talking about GPT models and OpenAI. There is no doubt that the Generative Pre-trained Transformer (GPT) architecture developed by OpenAI has demonstrated incredible results, also given the investment in training (almost 500 billion tokens) and the complexity of the model (175 billion parameters for GPT-3). Nevertheless, an incredible number of open-source Large Language Models (LLMs) have spread in the last months. Below are some examples:

Dolly: A 12-billion-parameter LLM developed by Databricks and trained on their ML platform. Source code: https://github.com/databrickslabs/dolly
StableLM: A series of LLMs developed by Stability AI, the company behind the popular image generation model Stable Diffusion. The series encompasses a variety of LLMs, some of which are fine-tuned for specific use cases. Source code: https://github.com/Stability-AI/StableLM
Falcon LLM: A 40-billion-parameter LLM developed by the Technology Innovation Institute and trained on a particularly high-quality dataset called RefinedWeb. Plus, as of now (June 2023), it ranks first globally in the latest Hugging Face independent verification of open-source AI models. Source code: https://huggingface.co/tiiuae
GPT-NeoX and GPT-J: Open-source reproductions of OpenAI's GPT series developed by EleutherAI, with 20 and 6 billion parameters respectively. Source code: https://huggingface.co/EleutherAI/gpt-neox-20b and https://huggingface.co/EleutherAI/gpt-j-6b
OpenLLaMA: Like the previous class of models, this one is an open-source reproduction of Meta AI's LLaMA, available in 3- and 7-billion-parameter versions. Source code: https://github.com/openlm-research/open_llama

If you are interested in digging deeper into those models and their performance, you can refer to the Hugging Face leaderboard here.

Image1: Hugging Face Leaderboard

Now, LLMs are great, yet to unlock their real power we need them to be positioned within an applicative logic. In other words, we want our LLMs to infuse intelligence within our applications. For this purpose, we will be using LangChain, a powerful lightweight SDK which makes it easier to integrate and orchestrate LLMs within applications. LangChain is one of the most popular LLM orchestrators; if you want to explore further packages, I encourage you to read about Semantic Kernel and Jarvis.

One of the nice things about LangChain is its integration with external tools: those might be OpenAI (and other LLM vendors), data sources, search APIs, and so on. In this article, we are going to explore how LangChain makes it easier to leverage open-source LLMs through its integration with the Hugging Face Hub.

Welcome to the realm of open-source LLMs
The Hugging Face Hub serves as a comprehensive platform comprising more than 120k models, 20k datasets, and 50k demo apps (Spaces), all of which are openly accessible and shared as open-source projects. It provides an online environment where developers can effortlessly collaborate and collectively develop machine learning solutions. Thanks to LangChain, it is much easier to start interacting with open-source LLMs. Plus, you can also surround those models with all the facilities provided by LangChain in terms of prompt design, memory retention, chain management, and so on. Let's see an implementation with Python.
To reproduce the code, make sure to have:
- Python 3.7.1 or higher: you can check your Python version by running python --version in your terminal
- LangChain installed: you can install it via pip install langchain
- The huggingface_hub Python package installed: you can install it via pip install huggingface_hub
- A Hugging Face Hub API key: to get the API key, you can register on the portal and then generate your secret key

For this example, I'm going to use the lightest version of Dolly, developed by Databricks and available in three sizes: 3, 7, and 12 billion parameters.

from langchain import HuggingFaceHub
from getpass import getpass
import os

HUGGINGFACEHUB_API_TOKEN = "your-api-key"
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN

repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64})

As you can see from the above code, the only information we need is our Hugging Face Hub API key and the model's repo ID; LangChain then takes care of initializing our model thanks to its direct integration with the Hugging Face Hub.

Now that we have initialized our model, it is time to define the structure of the prompt:

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

Finally, we can feed our model with a first question:

question = "In the first movie of Harry Potter, what is the name of the three-headed dog?"
print(llm_chain.run(question))

Output: The name of the three-headed dog in Harry Potter and the Philosopher Stone is Fuffy.

Even though I tested the light version of Dolly with "only" 3 billion parameters, it came back with pretty accurate results. Of course, for more complex tasks or real-world projects, heavier models might be taken into consideration, like the ones emerging as top performers on the Hugging Face leaderboard mentioned at the beginning of this article.
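The article also notes that LangChain provides prompt design, memory retention, and chain management around these models. Purely as an illustrative sketch (not part of the original article, and tied to the LangChain 0.0.x API referenced above; newer versions have reorganized these classes), wrapping the same Hugging Face Hub LLM in a conversation chain with buffer memory might look like this:

from langchain import HuggingFaceHub
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Reuses the repo_id and the HUGGINGFACEHUB_API_TOKEN environment variable set up earlier.
llm = HuggingFaceHub(
    repo_id="databricks/dolly-v2-3b",
    model_kwargs={"temperature": 0, "max_length": 64},
)

# ConversationBufferMemory keeps the running dialogue and feeds it back into each new prompt.
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

print(conversation.run("Who wrote the Harry Potter books?"))
print(conversation.run("In which year was the first one published?"))  # relies on remembered context

The second question omits the subject entirely; the buffer memory is what lets the model resolve "the first one" from the earlier turn.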
Conclusion
The realm of open-source LLMs is growing exponentially, and this creates a vibrant environment of experimentation and tuning from which anyone can benefit. Plus, some interesting trends are emerging, like the reduction in the number of model parameters in favour of an increase in the quality of the training dataset. In fact, we saw that the current top performer among open-source models is Falcon LLM, with "only" 40 billion parameters, which gained its strength from its high-quality training dataset. Finally, with the development of orchestration frameworks like LangChain and similar tools, it is getting easier and easier to leverage open-source LLMs and integrate them into our applications.

References
- https://huggingface.co/docs/hub/index
- Open LLM Leaderboard — a Hugging Face Space by HuggingFaceH4
- Hugging Face Hub — 🦜🔗 LangChain 0.0.189
- Overview (huggingface.co)
- stabilityai (Stability AI) (huggingface.co)
- Stability-AI/StableLM: StableLM: Stability AI Language Models (github.com)

Author Bio
Valentina Alto graduated in 2021 in data science. Since 2020, she has been working at Microsoft as an Azure solution specialist, and since 2022, she has been focusing on data and AI workloads within the manufacturing and pharmaceutical industries. She has been working closely with system integrators on customer projects to deploy cloud architectures with a focus on modern data platforms, data mesh frameworks, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. Since commencing her academic journey, she has been writing tech articles on statistics, machine learning, deep learning, and AI for various publications and has authored a book on the fundamentals of machine learning with Python.
Author of the book: Modern Generative AI with ChatGPT and OpenAI Models
Link - Medium, LinkedIn


Getting Started with Azure Speech Service

M.T. White
22 Aug 2023
10 min read
Introduction
Commanding machines to do your bidding was once science fiction. Being able to command a machine to do something with mere words graced the pages of many sci-fi comics and novels. It wasn't until recently that science fiction became science fact. With the rise of devices such as Amazon's Alexa and Apple's Siri, being able to vocally control a device has become a staple of the 21st century. So, how does one integrate voice control into an app? There are many ways to accomplish that. However, one of the easiest is to use an Azure AI tool called Speech Service. This tutorial is a crash course on how to integrate Azure's Speech Service into a standard C# app. To explore this AI tool, we're going to use it to create a simple profanity filter.

What is Azure Speech Service?
There are many ways to create a speech-to-text app. One could create one from scratch, use a library, or use a cloud service. Arguably the easiest way is with a cloud service such as the Azure Speech Service. This Azure AI service analyzes speech picked up by a microphone and converts it to a text string in the cloud. The resulting string is then sent back to the app that made the request. In other words, the speech-to-text service that Azure offers is an AI developer tool that allows engineers to quickly convert speech to text.

It is important to understand that the Speech Service is a developer's tool. Since the rise of systems like ChatGPT, what counts as an AI tool has become ambiguous at best. When people think of modern AI tools, they think of tools where you provide a prompt and get a response. However, when a developer thinks of a tool, they usually think of something that helps them get a job done quickly and efficiently. As such, the Azure Speech Service is an AI tool that helps developers integrate speech-to-text features into their applications with minimal setup.

The Azure Speech Service is a very powerful tool that can be integrated into almost anything. For example, you can create a profanity filter with minimal code, make a voice request to an LLM like ChatGPT, or do any number of other things. It is important to remember that the Azure Speech Service is an AI tool meant for engineers. Unlike tools like ChatGPT or LLMs in general, you will have to understand the basics of code to use it successfully. With that, what do you need to get started with the Speech Service?

What do you need to use Azure Speech Service?
Setting up an app that can utilize the Azure service requires relatively little. All you will need is the following:
- An Azure account
- Visual Studio (preferably the latest version)
- Internet connectivity
- The Microsoft.CognitiveServices.Speech NuGet package

This project is going to be a console-based application, so you won't need to worry about anything fancy like creating a Graphical User Interface (GUI). When all that is installed and ready to go, the next thing you will want to do is set up a simple speech-to-text service in Azure.

Setting up the Azure Speech Service
After you have your environment set up, you're going to want to set up your service. Setting up the speech-to-text service is quick and easy, as there is very little that needs to be done on the Azure side. All one has to do is set the service up by performing the following steps:

1. Log in to Azure and search for Speech Services.
2. Click the Create button in Figure 1 and fill out the wizard that appears:
Figure 1. Create Button
3. Fill out the wizard to match Figure 2. You can name the instance anything you want and set the resource group to anything you want. As far as the pricing tier goes, you will usually be able to use the service for free for a time. However, after the trial period ends you will eventually have to pay for the service. Regardless, once you have the wizard filled out, click Review + Create:
Figure 2. Speech Service
4. Keep following the wizard until you see the screen in Figure 3. On this screen, you will want to click the manage keys link that is circled in red:
Figure 3. Instance Service

This is where you get the keys necessary to use the AI tool. Clicking the link is not strictly necessary, as the keys are also at the bottom of the page. However, clicking the link is sometimes easier, as it will bring you directly to the keys. At this point, the service is set up. You will need to capture the key info, which can be viewed in Figure 4:
Figure 4. Key Information

You can do this by simply clicking the Show Keys button, which will unmask KEY 1 and KEY 2. Each instance you create will generate a new set of keys. As a safety note, you should never share your keys with anyone, as they would be able to use your service, which in turn means they would rack up your bill, among other cybersecurity concerns. Unmask the keys, grab KEY 1, and copy the region as well.

C# Code
Now comes the fun part of the project: creating the app. The app will be relatively simple. The only hard part will be installing the NuGet package for the Speech Service. To do this, simply add the NuGet package shown in Figure 5.
Figure 5. NuGet Package

Once that package is installed, you can start to implement the code. To start off, we're simply going to make an app that can dictate back what we say to it. To do this, input the following code:

// See https://aka.ms/new-console-template for more information
using Microsoft.CognitiveServices.Speech;

await translateSpeech();

static async Task translateSpeech()
{
    string key = "<Your Key>";
    string region = "<Your Region>";
    var config = SpeechConfig.FromSubscription(key, region);
    using (var recognizer = new SpeechRecognizer(config))
    {
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}

When you run this program, it will open a prompt. You will be able to speak into the computer's microphone, and whatever you say will be displayed. For example, run the program and say "Hello World". After the service is finished translating your speech, you should see the following displayed at the command prompt:
Figure 6. Output From App

Now, this isn't the full project. This is just a simple app that will dictate what we say to the computer. What we're aiming for in this tutorial is a simple profanity filter. For that, we need to add another function to the project to help filter the returned string. It is important to remember that what is returned is a text string, just like any other text string that one would use in C#.
As such, we can modify the program as follows to filter profanity:

// See https://aka.ms/new-console-template for more information
using Microsoft.CognitiveServices.Speech;

await translateSpeech();

static async Task translateSpeech()
{
    string key = "<Your Key>";
    string region = "<Your Region>";
    var config = SpeechConfig.FromSubscription(key, region);
    using (var recognizer = new SpeechRecognizer(config))
    {
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
        VetSpeech(result.Text);
    }
}

static void VetSpeech(String input)
{
    Console.WriteLine("checking phrase: " + input);
    String[] badWords = { "Crap", "crap", "Dang", "dang", "Shoot", "shoot" };
    foreach (String word in badWords)
    {
        if (input.Contains(word))
        {
            Console.WriteLine("flagged");
        }
    }
}

Now, in the VetSpeech function, we have an array of "bad" words. In short, if the returned string contains a variation of these words, the program will display "flagged". As such, if we were to say "Crap Computer" when the program is run, we can expect to see the following output at the prompt:
Figure 7. Profanity Output

As can be seen, the program flagged the phrase because the word "Crap" was in it.

Exercises
This tutorial was a basic rundown of the Speech Service in Azure. It is probably one of the simplest services to use, but it is still very powerful. Now that you have a basic idea of how the service works and how to write C# code for it, create a ChatGPT developer token and pass the returned string to ChatGPT. When done correctly, this project will allow you to verbally interact with ChatGPT: you should be able to verbally ask ChatGPT a question and get a response.

Conclusion
The Azure Speech Service is an AI tool. Unlike many other AI tools such as ChatGPT, it is meant for developers to build applications with. Also, unlike many other Azure services, it is a very easy-to-use system with minimal setup. As can be seen from the tutorial, the hardest part was writing the code that utilized the service, and even that was not particularly difficult. The best part is that the code provided in this tutorial is the basic code you need to interact with the service, meaning that all you have to do now is modify it to fit your project's needs. Overall, the power of the Speech Service is limited only by your imagination. This tool would be excellent for integrating verbal interaction with other tools like ChatGPT, creating voice-controlled robots, or anything else. It is a relatively cheap and powerful tool that can be leveraged for many things.

Author Bio
M.T. White has been programming since the age of 12. His fascination with robotics flourished when he was a child programming microcontrollers such as Arduino. M.T. currently holds an undergraduate degree in mathematics and a master's degree in software engineering, and is currently working on an MBA in IT project management. M.T. is currently working as a software developer for a major US defense contractor and is an adjunct CIS instructor at ECPI University. His background mostly stems from the automation industry, where he programmed PLCs and HMIs for many different types of applications. M.T. has programmed many different brands of PLCs over the years and has developed HMIs using many different tools.
Author of the book: Mastering PLC Programming


Getting Started with Google MakerSuite

Anubhav Singh
08 Aug 2023
14 min read
MakerSuite, essentially a developer tool, enables everyone with a Google Account to access the power of the PaLM API with a focus on building products and services with it. The MakerSuite interface allows rapid prototyping and testing of the configurations that are used while interacting with the PaLM API. Once users are satisfied with the configurations, they can very easily port them to their backend codebases.

We're now ready to dive into exploring the MakerSuite interface. To get started, head over to https://makersuite.google.com/ in your browser. Make sure you're logged in to your Google Account to be able to access the interface. You'll be able to see the welcome dashboard. The options available on MakerSuite as of the date of writing this article are Text prompts, Data prompts, and Chat prompts. Let's take a brief look at what each of these does.

Text Prompts
Text prompts are the most basic and customizable form of prompts that can be provided to the models. You can set them to any task or ask any question in a stateless manner. The user prompt and input are ingested by the model every time it is run, and the model itself does not hold any context. Thus, text prompts are a great starting point and can be made as deterministic or as creative in their output as required by the user.

Let us create a Text prompt in MakerSuite. Click on the Create button on the Text prompt card and you'll be presented with the prompt testing UI. At the top, MakerSuite allows users to save their prompts by name. It also provides starter samples which allow one to quickly test and understand how the product works. Below that is the main working area, where users can define their own prompts and, by adjusting the configuration parameters of the model at the bottom, run the prompts to produce an output.

First, click on the pencil icon at the top left to give this prompt a suitable name. For our example, we'll be building a prompt that asks the model to produce the etymology of any given word. We're using the following values:
name: Word Etymology
description: Asking PaLM API to provide word etymologies.

Click on "Save" to save these values and close the input modal. Kindly note that these values do not affect the model in any manner and are simply present for user convenience.

Now, in the main working area below, we'll write the required prompt. For our example, we write the prompt given below:

For any given word that follows, provide its etymology in no more than 300 words.
Aeroplane.
Etymology:

Now, let's adjust the model parameters. Click on the button next to the Run button to change the model settings. For our example, we shall set the following values:
model: Text Bison (use default)
Temperature: 0 (word etymologies are dry facts and are not expected to be creative)
Add stop sequence: (use default)
Max outputs: 1 (word etymologies are usually not going to benefit from variations of telling them)

Depending on the use case you're building your generative AI-backed software for, you may wish to change the safety settings of the model response. To do so, click on the Edit safety settings button. You can see the available options and change them as per your requirement. For our use case, we shall leave them at the defaults. At the bottom of the configuration menu, you can choose to adjust further advanced settings of the model output.
We shall leave these options at their defaults for now. Great, we're now all set to run the prompt. Click on the Run button at the bottom and wait for the model to produce the output. In our case, the model outputs:

The word "aeroplane" is derived from the Greek words "aēr" (air) and "planē" (to wander). The term was first used in the 1860s to describe a type of flying machine that was powered by a steam engine. In 1903, the Wright brothers made the first successful flight of a powered aeroplane.

Note that, for you, the response might come out slightly different due to the inherently non-deterministic nature of how generative AI works. At this point, you might want to experiment by erasing the model output and running the prompt again. Does the output change? Re-run it several times to observe changes in the model output. Then, try adjusting the values of the model configuration and see how that affects the output. If you had set the temperature configuration to 0, you will notice that the model likely produces the same output many times. Try increasing it to 1 and then run the model a few times. Does the output generated in each iteration remain the same now? It is highly possible that you'll observe the model output changing every time you re-run the prompt.

It is interesting to note here that the prompt you provide to the model does not contain any examples of how the model should respond. This method of using the model is called zero-shot learning, in which the trained model is asked to produce predictions for an input that it may not have seen before. In our example, it is the task of providing word etymologies, which the model may or may not have been trained on.

This makes us wonder: if we gave the model an input that it has definitely not been trained on, is it likely to produce the correct response? Let us try this out. Change the word in our etymology prompt example to "xakoozifictation". Hit the Run button to see what the model outputs. Instead of telling us that the word does not exist and thus has no meaning, the model attempts to produce an etymology of the word. The output we got was:

Xakoozifictation is a portmanteau of the words "xakooz" and "ification". Xakooz is a nonsense word created by combining the sounds of the words "chaos" and "ooze". ification is a suffix that can be added to verbs to create nouns that describe the process of doing something. In this case, xakoozifictation means the process of making something chaotic or oozy.

What we observe here is called "model hallucination" - a phenomenon common among large language models wherein the model produces output that is contrary to common logic or inaccurate in terms of real-world knowledge. It is highly recommended to read more about model hallucinations in the "Challenges in working with LLMs" section.

Let us continue our discussion about zero-shot learning. We saw that when we provide only a prompt to the model and no examples of how to produce responses, the model tries its best to produce a response, and in most general cases it succeeds. However, if we were to provide some examples of the expected input-output pairs, could we steer the model to perform more accurately and do away with the model hallucinations? Let us give this a try by providing some input-output examples to the model.
Update your model prompt to the following:

For any given word that follows, provide its etymology in no more than 300 words.
Examples:
Word: aeroplane
Reasoning: Since it's a valid English word, produce an output.
Etymology: Aeroplane is a compound word formed from the Greek roots "aer" (air) and "planus" (flat).
Word: balloon
Reasoning: Since it's a valid English word, produce an output.
Etymology: The word balloon comes from the Italian word pallone, which means ball. The Italian word is derived from the Latin word ballare, which means to dance.
Word: oungopoloctous
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available
Word: kaploxicating
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available
Word: xakoozifictation
Etymology:

In the above prompt, we have provided two examples of words that exist and two examples of words that do not exist. We expect the model to learn from these examples and produce output accordingly. Hit Run to see the output of the model; remember to set the temperature configuration of the model back to 0. You will see that the model now responds with "Not available" for non-existent words and with etymologies only for words that exist in the English dictionary. Hence, by providing a few examples of how we expect the model to behave, we were able to stop the model hallucination problem.

This method of providing some samples of the expected input-output pairs to the model in the prompt is called few-shot learning. In few-shot learning, the model is expected to predict output for unknown input based on a few similar samples it has received prior to the prediction task. In special cases, the number of samples might be exactly one, which is termed "one-shot learning". Now, let us explore the next type of prompt available in MakerSuite: Data prompts.

Data Prompts
In Data prompts, the user is expected to use the model to generate more samples of data based on provided samples. The MakerSuite data prompt interface defines two sections of the prompt: the prompt itself, which is now optional, and the samples of the data that the prompt has to work on, which is a required section. It is important to note that at the bottom of the page, the model is still the Text Bison model. Thus, Data prompts can be understood as specific use cases of text generation using the Text Bison model.

Further, there is no way to test Data prompts without specifying the inputs as one or more columns of the to-be-generated rows of the dataset. Let us build a prompt for this interface. Since providing a prompt text is not necessary, we'll skip it and instead fill in the table as shown below. In order to add more columns than the number present by default, use the Add button on the top right.

Once this is done, we are ready to provide the input column for the test inputs. In the "Test your prompt" section at the bottom, fill in only the INPUT number column as shown below. Now, click on the Run button to see how the model produces outputs for this prompt. We see that the model produces the rest of the data for those rows correctly, using the format that we provided. This makes us wonder: if we provide historical data to a Data prompt, will it be able to predict future trends? Let us give this a try.

Create a new Data prompt and, in the data examples table, click Add -> Import examples on the top right.
You may choose any existing Google Sheet from the dialog box, or upload any supported file. We choose to upload a CSV file, namely the Iris flower dataset's CSV. We use the one found at https://gist.github.com/netj/8836201/

On selecting the file, the interface will ask you to assign the columns in the CSV to columns in your data examples. We choose to create new input columns for all the feature columns of the Iris dataset, and keep the labels column as an output column.

After importing the examples, let us manually move a few examples to the "Test your prompt" section. Remember to remove these examples from the data examples section above to ensure the model is not being tested on the same data it is trained on. Now, click the Run button to get the model's output. We observe that the model is able to correctly output the label column values as per the examples it has received. Hence, besides generating more examples for a given dataset, the model is also capable of making predictions about the inputs to a degree. Much more extensive testing would be required to determine the accuracy of the model, which is beyond the scope of this article. Finally, let us explore Chat prompts.

Chat Prompts
Chatting with generative AI models is the form in which most people have interacted with them first. Made popular once more by the advent of ChatGPT, the concept of AI being able to have intelligent conversations has been around for a very long time and has been a regular part of popular culture. One of the most well-known examples of an AI being able to take instructions and produce output accordingly is JARVIS from the Iron Man series of comics. With the latest possibilities enabled by generative AI, building such systems is very much a realistic task, with efforts already underway. In this section, we shall see how we can have conversations with generative AI models that mimic human-like understanding and decision-making skills.

First, click on the Create New button on the top left of the MakerSuite interface and select the Chat prompt. You will be able to see the blank interface for designing a Chat prompt. One immediate change to notice is that there is no longer a Run button at the bottom of the UI. This has been moved to the "Test your prompt" section, which now has a chat box-like interface; the message send button in the chat box functions as the Run button.

On the left, the interface section reads "Write your prompt examples"; we'll call this the prompt examples section. Also take note of the Context field available in this section, which can be used to set the rules of the interaction and the format in which the output is expected. Now, let us design a chat prompt with the following values:

Context: You're a banker at the Gringotts bank, set in the Wizarding world of Harry Potter.
User: I wish to access my account
Model: Very well, please present your key.
User: How safe are the vaults at Gringotts?
Model: Gringotts' vaults are considered extremely safe. Protected by complex magic, various creatures, and intricate security measures, these vaults are nearly impenetrable. Unauthorized access is extraordinarily challenging and dangerous, as demonstrated multiple times in the series. This reputation contributes to the bank's trustworthiness among wizards.

We expect the model to pretend to be a banker at the Gringotts bank, which is referenced from the popular book series Harry Potter.
Since it's a fictional world and we expect the conversation to be similarly unbound from the real world, we should increase the model temperature, allowing it to be more creative. For this example, let's set the model temperature to 0.7.

Let us try having a conversation with the model. This is the conversation that happens with the model, in our case:

We observe that although we have not provided the model with an example of how to respond when the user says they do not have the key, it correctly handles the response based on its existing knowledge about Gringotts Bank's policies.

Now that we have covered the different types of prompts available in MakerSuite, let's explore how we can use them via code, making direct calls to the PaLM API.

Author Bio

Anubhav Singh, co-founder of Dynopii and Google Developer Expert in Google Cloud, is a seasoned developer since the pre-Bootstrap era. Anubhav has extensive experience as a freelancer and AI startup founder. He authored "Hands-on Python Deep Learning for Web" and "Mobile Deep Learning with TensorFlow Lite, ML Kit, and Flutter." A Google Developer Expert in GCP, he co-organizes the TFUG Kolkata community and formerly led the team at GDG Cloud Kolkata. Anubhav is often found discussing system architecture, machine learning, and web technologies.
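To give a flavour of the code-based workflow hinted at above, here is a minimal, illustrative sketch of sending an abbreviated version of the few-shot etymology prompt to the PaLM text model through the google-generativeai Python client. The package name, model identifier, and call signature reflect the public PaLM API client at the time of writing and should be treated as assumptions to verify against the current documentation; the API key placeholder is hypothetical.

import google.generativeai as palm  # assumed package: pip install google-generativeai

palm.configure(api_key="YOUR_PALM_API_KEY")  # hypothetical placeholder key

# Abbreviated few-shot prompt, mirroring the MakerSuite example above
few_shot_prompt = """For any given word that follows, provide its etymology in no more than 300 words.
Word: aeroplane
Etymology: Aeroplane is a compound word formed from the Greek roots "aer" (air) and "planus" (flat).
Word: oungopoloctous
Etymology: Not available
Word: xakoozifictation
Etymology:"""

# temperature=0 mirrors the deterministic setting used in the text prompt walkthrough
response = palm.generate_text(
    model="models/text-bison-001",  # assumed model identifier
    prompt=few_shot_prompt,
    temperature=0.0,
)
print(response.result)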

Explainable AI Development and Deployment

Swagata Ashwani
01 Nov 2023
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

Generative AI is a subset of artificial intelligence that trains models to generate new data similar to some existing data. Examples are image generation (creating realistic images that do not exist), text generation (generating human-like text based on a given prompt), and music composition (creating new compositions based on existing styles and genres).

LLMs - Large Language Models - are a type of AI model specialized in processing and generating human language. They are trained on vast amounts of text data, which makes them capable of understanding context, semantics, and language nuances. An example is GPT-3 from OpenAI. LLMs automate routine language processing tasks, freeing up human resources for more strategic work.

Black Box Dilemma

Complex ML models, like deep neural networks, are often termed "black boxes" due to their opaque nature. While they can process vast amounts of data and provide accurate predictions, understanding how they arrived at a particular decision is challenging. Transparency in ML models is crucial for building trust, verifying results, and ensuring that the model is working as intended. It is also necessary for debugging and improving models.

Model Explainability Landscape

Model explainability refers to the degree to which a human can understand the decisions made by a machine learning model. It is about making the model's decisions interpretable to humans, which is crucial for trust and actionable insights. There are two types of explainability approaches:

Intrinsic explainability refers to models that are naturally interpretable due to their simplicity and transparency. They provide insight into their decision-making process as part of their inherent design. Examples: Decision Trees, Linear Regression, Logistic Regression. Pros and cons: while they are easy to understand, they may lack the predictive power of more complex models.

Post-hoc explainability methods are applied after a model has been trained.
They aim to explain the decisions of complex, black-box models by approximating their behavior or inspecting their structure. Examples: LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and Integrated Gradients. Pros and cons: post-hoc methods allow for the interpretation of complex models, but the explanations provided may not always be perfect or may require additional computational resources.

SHAP (SHapley Additive exPlanations)

Concept:
Main idea: SHAP values provide a measure of the impact of each feature on the prediction for a particular instance.
Foundation: based on Shapley values from cooperative game theory.

Working mechanism:
Shapley value calculation: for a given instance, consider all possible subsets of features. For each subset, compare the model's prediction with and without a particular feature. Average these differences across all subsets to compute the Shapley value for that feature.
SHAP value interpretation: positive SHAP values indicate a feature pushing the prediction higher, while negative values indicate the opposite. The magnitude of the SHAP value indicates the strength of the effect.

LIME (Local Interpretable Model-agnostic Explanations)

Concept:
Main idea: LIME aims to explain the predictions of machine learning models by approximating the model locally around the prediction point.
Model-agnostic: it can be used with any machine learning model.

Working mechanism:
1. Selection of data point: select a data point that you want to explain.
2. Perturbation: create a dataset of perturbed instances by randomly changing the values of features of the original data point.
3. Model prediction: obtain predictions for these perturbed instances using the original model.
4. Weight assignment: assign weights to the perturbed instances based on their proximity to the original data point.
5. Local model training: train a simpler, interpretable model (like linear regression or a decision tree) on the perturbed dataset, using the weights from step 4.
6. Explanation extraction: extract explanations from the simpler model, which now serves as a local surrogate of the original complex model.

Hands-on example

In the code snippet below, we use a popular churn prediction dataset to create a Random Forest model.

# Part 1 - Data Preprocessing
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13]
y = dataset.iloc[:, 13]
dataset.head()

# Create dummy variables
geography = pd.get_dummies(X["Geography"], drop_first=True)
gender = pd.get_dummies(X['Gender'], drop_first=True)

# Concatenate the data frames
X = pd.concat([X, geography, gender], axis=1)

# Drop unnecessary columns
X = X.drop(['Geography', 'Gender'], axis=1)

# The article references X_train, X_test and a trained classifier without showing
# this step; the split and Random Forest fit below are an assumed, minimal version of it.
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(X_train, y_train)

Now, we save the model pickle file and use the lime and shap libraries for explainability. Install the libraries from the command line first:

pip install lime
pip install shap

import pickle
pickle.dump(classifier, open("classifier.pkl", 'wb'))

import lime
from lime import lime_tabular

interpretor = lime_tabular.LimeTabularExplainer(
    training_data=np.array(X_train),
    feature_names=X_train.columns,
    mode='classification')

LIME has a LimeTabularExplainer module to set up the explainability workflow for tabular data.
We pass in the training dataset and set the mode of the model as classification here.

exp = interpretor.explain_instance(
    data_row=X_test.iloc[5],  # new data
    predict_fn=classifier.predict_proba)
exp.show_in_notebook(show_table=True)

We can see from the above chart that LIME is able to explain one particular prediction from X_test in detail. The prediction here is 1 (churn is True); the features contributing positively are represented in orange, and those contributing negatively are shown in blue.

import shap
shap.initjs()
explainer = shap.Explainer(classifier)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

In the above code snippet, we have created a plot for explainability using the shap library. The shap library here gives a global explanation for the entire test dataset, compared to LIME, which focuses on local interpretation. From the graph, we can see how much each feature contributes to each of the churn classes.

Conclusion

Explainability in AI enables trust in AI systems, lets us dive deeper into understanding the reasoning behind the models, and makes it possible to update models appropriately in case there are any biases. In this article, we used libraries such as SHAP and LIME that make explainability easier to design and implement.

Author Bio

Swagata Ashwani serves as a Principal Data Scientist at Boomi, where she leads the charge in deploying cutting-edge AI solutions, with a particular emphasis on Natural Language Processing (NLP). With a stellar track record in AI research, she is always on the lookout for the next state-of-the-art tool or technique to revolutionize the industry. Beyond her technical expertise, Swagata is a fervent advocate for women in tech. She believes in giving back to the community, regularly contributing to open-source initiatives that drive the democratization of technology.

Swagata's passion isn't limited to the world of AI; she is a nature enthusiast, often wandering beaches and indulging in the serenity they offer. With a cup of coffee in hand, she finds joy in the rhythm of dance and the tranquility of the great outdoors.
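As a small companion to the LIME walkthrough above, the explanation object can also be consumed programmatically rather than only rendered in a notebook, which is useful when explanations need to be logged or surfaced in an application. The snippet below is a minimal sketch based on the lime library's Explanation interface; it assumes the interpretor and classifier objects created earlier, and the wording of the printed direction is an illustrative assumption.

# Hedged sketch: extract the local feature weights from a LIME explanation
exp = interpretor.explain_instance(
    data_row=X_test.iloc[5],
    predict_fn=classifier.predict_proba)

# as_list() returns (feature rule, weight) pairs for the explained prediction
for feature_rule, weight in exp.as_list():
    direction = "pushes towards churn" if weight > 0 else "pushes away from churn"
    print(f"{feature_rule}: {weight:.3f} ({direction})")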

Build your First RAG with Qdrant

Louis Owen
12 Oct 2023
10 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

Large Language Models (LLM) have emerged as powerful tools for various tasks, including question-answering. However, as many are now aware, LLMs alone may not be suitable for the task of question-answering, primarily due to their limited access to up-to-date information, often resulting in incorrect or hallucinated responses. To overcome this limitation, one approach involves providing these LMs with verified facts and data. In this article, we'll explore a solution to this challenge and delve into the scalability aspect of improving question-answering using Qdrant, a vector similarity search engine and vector database.

To address the limitations of LLMs, one approach is to provide known facts alongside queries. By doing so, LLMs can utilize the actual, verifiable information and generate more accurate responses. One of the latest breakthroughs in this field is the RAG model, a tripartite approach that seamlessly combines Retrieval, Augmentation, and Generation to enhance the quality and relevance of responses generated by AI systems.

At the core of the RAG model lies the retrieval step. This initial phase involves the model searching external sources to gather relevant information. These sources can span a wide spectrum, encompassing databases, knowledge bases, sets of documents, or even search engine results. The primary objective here is to find valuable snippets or passages of text that contain information related to the given input or prompt.

The retrieval process is a vital foundation upon which RAG's capabilities are built. It allows the model to extend its knowledge beyond what is hardcoded or pre-trained, tapping into a vast reservoir of real-time or context-specific information. By accessing external sources, the model ensures that it remains up-to-date and informed, a critical aspect in a world where information changes rapidly.

Once the retrieval step is complete, the RAG model takes a critical leap forward by moving to the augmentation phase. During this step, the retrieved information is seamlessly integrated with the original input or prompt. This fusion of external knowledge with the initial context enriches the pool of information available to the model for generating responses.

Augmentation plays a pivotal role in enhancing the quality and depth of the generated responses. By incorporating external knowledge, the model becomes capable of providing more informed and accurate answers. This augmentation also aids in making the model's responses more contextually appropriate and relevant, as it now possesses a broader understanding of the topic at hand.

The final step in the RAG model's process is the generation phase. Armed with both the retrieved external information and the original input, the model sets out to craft a response that is not only accurate but also contextually rich. This last step ensures that the model can produce responses that are deeply rooted in the information it has acquired. By drawing on this additional context, the model can generate responses that are more contextually appropriate and relevant. This is a significant departure from traditional AI models that rely solely on pre-trained data and fixed knowledge.
The generation phase of RAG represents a crucial advance in AI capabilities, resulting in more informed and human-like responses.

To summarize, RAG can be utilized for the question-answering task by following a multi-step pipeline that starts with a set of documentation. These documents are converted into embeddings, essentially numerical representations, and then subjected to similarity search when a query is presented. The top N most similar document embeddings are retrieved, and the corresponding documents are selected. These documents, along with the query, are then passed to the LLM, which generates a comprehensive answer.

This approach improves the quality of question-answering but depends on two crucial variables: the quality of embeddings and the quality of the LLM itself. In this article, our focus will be on the former - enhancing the scalability of the embedding search process with Qdrant.

Qdrant, pronounced "quadrant," is a vector similarity search engine and vector database designed to address these challenges. It provides a production-ready service with a user-friendly API for storing, searching, and managing vectors. However, what sets Qdrant apart is its enhanced filtering support, making it a versatile tool for neural-network or semantic-based matching, faceted search, and various other applications. It is built using Rust, a programming language known for its speed and reliability even under high loads, making it an ideal choice for demanding applications. The benchmarks speak for themselves, showcasing Qdrant's impressive performance.

In the quest for improving the accuracy and scalability of question-answering systems, Qdrant stands out as a valuable ally. Its capabilities in vector similarity search, coupled with the power of Rust, make it a formidable tool for any application that demands efficient and accurate search operations. Without wasting any more time, let's take a deep breath, make ourselves comfortable, and be ready to learn how to build your first RAG with Qdrant!

Setting Up Qdrant

To get started with Qdrant, you have several installation options, each tailored to different preferences and use cases. In this guide, we'll explore the various installation methods, including Docker, building from source, the Python client, and deploying on Kubernetes.

Docker Installation

Docker is known for its simplicity and ease of use when it comes to deploying software, and Qdrant is no exception. Here's how you can get Qdrant up and running using Docker:

1. First, ensure that the Docker daemon is installed and running on your system. You can verify this with the following command:

sudo docker info

If the Docker daemon is not listed, start it to proceed. On Linux, running Docker commands typically requires sudo privileges. To run Docker commands without sudo, you can create a Docker group and add your users to it.

2. Pull the Qdrant Docker image from DockerHub:

docker pull qdrant/qdrant

3. Run the container, exposing port 6333 and specifying a directory for data storage:

docker run -p 6333:6333 -v $(pwd)/path/to/data:/qdrant/storage qdrant/qdrant

Building from Source

Building Qdrant from source is an option if you have specific requirements or prefer not to use Docker. Here's how to build Qdrant using Cargo, the Rust package manager. Before compiling, make sure you have the necessary libraries and the Rust toolchain installed.
The current list of required libraries can be found in the Dockerfile.

Build Qdrant with Cargo:

cargo build --release --bin qdrant

After a successful build, you can find the binary at ./target/release/qdrant.

Python Client

In addition to the Qdrant service itself, there is a Python client that provides additional features compared to clients generated directly from the OpenAPI specification. To install the Python client, you can use pip:

pip install qdrant-client

This client allows you to interact with Qdrant from your Python applications, enabling seamless integration and control.

Kubernetes Deployment

If you prefer to run Qdrant in a Kubernetes cluster, you can utilize a ready-made Helm chart. Here's how you can deploy Qdrant using Helm:

helm repo add qdrant https://qdrant.to/helm
helm install qdrant-release qdrant/qdrant

Building RAG with Qdrant and LangChain

Qdrant works seamlessly with LangChain; in fact, you can use Qdrant directly in LangChain through the VectorDBQA class! The first thing we need to do is gather all the documents that we want to use as the source of truth for our LLM. Let's say we store them in a list variable named docs. This docs variable is a list of strings, where each element of the list consists of chunks of paragraphs.

The next thing we need to do is generate the embeddings from the docs. For the sake of an example, we'll use a small model provided by the sentence-transformers package.

from langchain.vectorstores import Qdrant
from langchain.embeddings import HuggingFaceEmbeddings

embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

qdrant_vec_store = Qdrant.from_texts(
    docs,
    embedding_model,
    host=QDRANT_HOST,
    api_key=QDRANT_API_KEY)

Once we set up the embedding model and Qdrant, we can move to the next part of RAG, which is augmentation and generation. To do that, we'll utilize the VectorDBQA class. This class will load some docs from Qdrant and then pass them into the LLM. Once the docs are passed, or augmented, the LLM will then do its job of analyzing them to generate the answer to the given query. In this example, we'll use GPT-3.5-turbo provided by OpenAI.

from langchain import OpenAI, VectorDBQA

llm = OpenAI(openai_api_key=OPENAI_API_KEY)
rag = VectorDBQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    vectorstore=qdrant_vec_store,
    return_source_documents=False)

The final thing to do is to test the pipeline by passing a query to the rag variable, and LangChain, supported by Qdrant, will handle the rest!

rag.run(question)

Below are some examples of the answers generated by the LLM based on the provided documents, using the Natural Questions dataset.

Conclusion

Congratulations on keeping up to this point! Throughout this article, you have learned what RAG is, how it can improve the quality of your question-answering model, how to scale the embedding search part of the pipeline with Qdrant, and how to build your first RAG with Qdrant and LangChain. Hope the best for your experiments in creating your first RAG, and see you in the next article!

Author Bio

Louis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech.
Outside of work, he loves to spend his time helping data science enthusiasts become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time on his hobbies: watching movies and working on side projects.

Currently, Louis is an NLP Research Engineer at Yellow.ai, the world's leading CX automation platform. Check out Louis' website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.
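As a companion to the LangChain integration above, here is a minimal, illustrative sketch of talking to Qdrant directly with the qdrant-client package installed earlier. The collection name, vector values, and payloads are made-up placeholders, and the calls shown reflect the qdrant-client library around the time of writing, so treat the exact API as an assumption to check against the official documentation.

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

# Connect to the Qdrant instance started with Docker above
client = QdrantClient(host="localhost", port=6333)

# Create a collection sized for all-mpnet-base-v2 embeddings (768 dimensions)
client.recreate_collection(
    collection_name="docs",  # hypothetical collection name
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

# Upsert a couple of placeholder vectors with their source text as payload
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=1, vector=[0.0] * 768, payload={"text": "first chunk"}),
        PointStruct(id=2, vector=[0.1] * 768, payload={"text": "second chunk"}),
    ],
)

# Retrieve the most similar stored vectors for a query embedding
hits = client.search(collection_name="docs", query_vector=[0.05] * 768, limit=2)
for hit in hits:
    print(hit.id, hit.score, hit.payload["text"])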

Getting Started with LangChain

Sangita Mahala
27 Sep 2023
7 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights and books. Don't miss out – sign up today!

Introduction

LangChain was launched in October 2022 as an open-source project by Harrison Chase. It is a Python framework that makes it easy to work with large language models (LLMs) such as the OpenAI GPT-3 language model. LangChain provides an easy-to-use API that makes it simple to interact with LLMs. You can use the API to generate text, translate languages, and answer questions.

Why use LangChain?

LangChain is a powerful tool that can be used to build a wide variety of applications and improve the productivity and quality of tasks. There are many reasons to use LangChain, including:

Simplicity: LangChain provides a simple and easy interface for interacting with GPT-3. You don't need to worry about the details of the OpenAI API.
Flexibility: LangChain allows you to customize the way you interact with GPT-3. You can use LangChain to build your own custom applications.
Reduced costs: LangChain can help you to reduce costs by eliminating the need to hire human experts to perform LLM-related tasks.
Increased productivity: LangChain can help you to increase your productivity by making it easy to generate high-quality text, translate languages, write creative content, and answer questions in an informative way.

Getting Started with LangChain LLM

In order to completely understand LangChain and how to apply it in a practical use-case situation, you first have to set up the development environment.

Installation

To get started with LangChain, you have to:

Step 1: Install the LangChain Python library:

pip install langchain

Step 2: Install the openai package:

pip install openai

Step 3: Obtain an OpenAI API key.

In order to be able to use OpenAI's models through LangChain, you need to fetch an API key from OpenAI as well, so follow these steps:

Go to the OpenAI website by clicking this link: https://platform.openai.com/
Go to the top right corner of your screen and click on "Sign up", or "Sign in" if you already have an account. After signing in, you'll be directed to the OpenAI dashboard.
Now navigate to the right corner of your OpenAI dashboard, click on the Personal button, and then click on the "View API keys" section. Once you click "View API keys", you will be redirected to the API keys section page. Then click on "+ Create new secret key".
Now provide a name for the secret key, for example: LangChain. Once you click the create secret key button, you will be redirected to the secret key prompt; copy the API key and click Done.
The API key should look like a long alphanumeric string (for example: "sk-12345abcdeABCDEfghijKLMNZC").

Note: please keep this secret key safe and accessible. For security reasons, you won't be able to view it again through your OpenAI account.
If you lose this secret key, you'll need to generate a new one.

Step 4: After getting the API key, execute the following command to add it as an environment variable:

export OPENAI_API_KEY="..."

If you'd prefer not to set an environment variable, you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:

from langchain.llms import OpenAI
llm = OpenAI(openai_api_key="...")

Examples

Here are some of the best hands-on examples of LangChain applications:

Content generation

LangChain can be used to generate text content, such as blog posts, marketing materials, and code. This can help businesses save time and produce high-quality content.

Output:

Oh, feathered friend, so free and light, You dance across the azure sky, A symphony of colors bright, A song of joy that never dies. Your wings outstretched, you soar above, A glimpse of heaven from on high, Your spirit wild, your spirit love, A symbol of the endless sky.

Translating languages

LangChain can also be used to translate languages accurately and efficiently. This can make it easier for people to interact with people around the world and for businesses to operate in different nations. (The example prompt and its output appear as screenshots in the original article.)

Question answering

LangChain can also be used to build question-answering systems that can provide comprehensive and informative answers to users' questions. Question answering can be used for educational, research, and customer support tools. (The example prompt and its output appear as screenshots in the original article.)

Check out LangChain's official documentation to explore the various toolkits available and to get access to their free guides and example use cases.

How LangChain can be used to build the future of AI

There are several ways that LangChain can be utilized to build the AI of the future.

Creating LLMs that are more effective and accurate: by giving LLMs access to more information and resources, LangChain can help them perform better. LangChain, for example, can be used to link LLMs to knowledge databases or to other LLMs. As a result, LLMs can gain a better understanding of the world, and their replies can be more accurate and insightful.

Making LLMs more accessible: regardless of a user's level of technical proficiency, LangChain makes using LLMs simpler. This may provide more equitable access to LLMs and enable individuals to use them to develop new, cutting-edge applications. For example, LangChain may be used to create web-based or mobile applications that enable users to communicate with LLMs without writing any code.

Developing new LLM applications: this is simple with LangChain, as shown by its chatbot, content generation, and translation examples. This could accelerate the deployment of LLMs across several businesses. For example, LangChain may be utilized for building chatbots that can assist doctors in illness diagnosis, or for content-generating systems that help companies develop personalized marketing materials.

Conclusion

In this article, we've explored LangChain's main capabilities, given some interesting examples of its uses, and provided a step-by-step guide to help you start your AI adventure. LangChain is not just a tool; it's a gateway to the future of AI. It accelerates the adoption of LLMs in a variety of industries by making it simpler to design and deploy LLM-powered applications. It provides lots of advantages, such as higher productivity, enhanced quality, lower costs, ease of use, and flexibility.
The ability of LangChain, as an entire system, to revolutionize how we interface with computers makes it a tremendous instrument. It assists in the development of the AI of the future by making it simpler to create and deploy LLM-powered applications. Now, it's your turn to unlock the full potential of AI with LangChain. The future is waiting for you, and it starts with you.

Author Bio

Sangita Mahala is a passionate IT professional with an outstanding track record, holding an impressive array of certifications, including 12x Microsoft, 11x GCP, 2x Oracle, and LinkedIn Marketing Insider Certified. She is a Google Crowdsource Influencer and an IBM Champion Learner Gold. She also possesses extensive experience as a technical content writer and accomplished book blogger. She is always committed to staying up to date with emerging trends and technologies in the IT sector.
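Since the example screenshots referenced above are not reproduced here, the following is a minimal, hedged sketch of what the content-generation and translation examples might look like in code, using the langchain and openai packages installed earlier. The prompt text, temperature value, and use of the OpenAI completion-style model are illustrative assumptions; the key is read from the OPENAI_API_KEY environment variable set in Step 4.

from langchain.llms import OpenAI

# Uses the OPENAI_API_KEY environment variable configured earlier
llm = OpenAI(temperature=0.7)

# Content generation: ask the LLM for a short poem, similar to the sample output above
poem = llm("Write a short, uplifting poem about a bird soaring across the sky.")
print(poem)

# Translation: the same interface can serve the translation example
translation = llm("Translate the following sentence to French: 'Knowledge is power.'")
print(translation)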

Generative Recolor with Adobe Firefly

Joseph Labrecque
23 Aug 2023
10 min read
Adobe Firefly Overview

Adobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly, have a look at their FAQ.

Image 1: Adobe Firefly

For more information about the usage of Firefly to generate images, text effects, and more, have a look at the previous articles in this series:

Animating Adobe Firefly Content with Adobe Animate
Exploring Text to Image with Adobe Firefly
Generating Text Effects with Adobe Firefly
Adobe Firefly Feature Deep Dive
Generative Fill with Adobe Firefly (Part I)
Generative Fill with Adobe Firefly (Part II)

This Firefly article will focus on a unique use of AI prompts via the Generative recolor module.

Generative Recolor and SVG

While most procedures in Firefly are focused on generating imagery through text prompts, the service also includes modules that use prompt-driven AI a bit differently. The subject of this article, Generative recolor, is a perfect example of this.

Generative recolor works with vector artwork in the form of SVG files. If you are unfamiliar with SVG, it stands for Scalable Vector Graphics and is an XML format, so it uses text-based nodes similar to HTML.

Image 2: An SVG file is composed of vector information defining points, paths, and colors

As the name indicates, we are dealing with vector graphics here and not photographic, pixel-based bitmap images. Vectors are often used for artwork, logos, and the like, as they can be infinitely scaled and easily recolored.

One of the best ways of generating SVG files is by designing them in a vector-based design tool like Adobe Illustrator. Once you have finished designing your artwork, you'll save it as SVG for use in Firefly.

Image 3: Cat artwork designed in Adobe Illustrator

To convert your Illustrator artwork to SVG, perform the following steps:

1. Choose File > Save As to open the save as dialog.
2. Choose SVG (svg) for the file format.

Image 3: Selecting SVG (svg) as the file format

3. Browse to the location on your computer where you would like to save the file.
4. Click the Save button.

You now have an SVG file ready to recolor within Firefly. If you desire, you can download the provided cat.svg file that we will work on in this article.

Recolor Vector Artwork with Generative Recolor

Generative recolor, like all Firefly modules, can be found directly at https://firefly.adobe.com/ so long as you are logged in with your Adobe ID. From the main Firefly page, you will find a number of modules for different AI-driven tasks.

Image 4: Locate the Generative recolor Firefly module

Let's explore Generative recolor in Firefly:

1. Locate the module named Generative recolor.
2. Click the Generate button to get started.

You are taken to an intermediate view where you are able to upload your chosen SVG file for the purposes of recoloring the vector art based upon a descriptive text prompt.

Image 5: The Upload SVG button prompt appears, along with sample files

3. Click the Upload SVG button and choose cat.svg from your file system. Of course, you can use any SVG file you want if you have another in mind.
If you do not have an SVG file you'd like to use, you can click on any of the samples presented below the Upload SVG button to load one up into the module.

The SVG is uploaded and a control appears which displays a preview of your file along with a text input where you can write a short text prompt describing the color palette you'd like to generate.

Image 6: The Generative recolor input requests a text prompt

4. Think of some descriptive words for an interesting color palette and type them into the text input. I'll input the following simple prompt for this demonstration: "northern lights".
5. Click the Generate button when ready.

You are taken into the primary Generative recolor user interface experience and a set of four color variants is immediately available for preview.

Image 7: The Firefly Generative recolor user interface

The interface appears similar to what you might have seen in other Firefly modules – but there are some key differences here, since we are dealing with recoloring vector artwork.

The larger, left-most area contains a set of four recolor variants to choose from. Below this is the prompt input area which displays the current text prompt and a Refresh button that allows the generation of additional variants when the prompt is updated. To the right of this area are presented various additional options within a clean user interface that scrolls vertically. Let's explore these from top to bottom.

The first thing you'll see is a thumbnail of your original artwork with the ability to replace the present SVG with a new file:

Image 8: You can replace your artwork by uploading a new SVG file

Directly below this, you will find a set of sample prompts that can be applied to your artwork:

Image 9: Sample prompts can provide immediate results

Clicking upon any of these thumbnails will immediately overwrite the existing prompt and cause a refresh – generating a new set of four recolor variants.

Next is a dropdown selection which allows the choice of color harmony:

Image 10: A number of color harmonies are available

Choosing to align the recolor prompt with a color harmony will impact which colors are being used based off a combination of the raw prompt – guided by harmonization rules. An indicator will be added along with the text prompt. For more information about color and color harmonies, check out Understanding color: A visual guide – from Adobe.

Below is a set of eighteen color swatches to choose from:

Image 11: Color chips can add bias to your text prompt

Clicking on any of these swatches will add that color to the options below your text prompt to help guide the recolor process. You can select one or many of these swatches to use.

Finally, at the very bottom of this area is a toggle switch that allows you to either preserve black and white colors in your artwork or to recolor them just like any other color.

Image 12: You can choose to preserve black and white during a recolor session or not

That is everything along the right-hand side of the interface. We'll return to this area shortly – but for now… let's see the options that appear when hovering the mouse cursor over any of the four recolor variants:

Image 13: The Generative recolor overlay

Hovering over a recolor variant will reveal a number of options:

Prominent colors: Displays the colors used in this recolor variant.
Shuffle colors: Will use the same colors… but distribute them differently across the vector artwork.
Options: Copy to clipboard is the only option that is available via this menu.
Download: Enables the download of this particular recolor variant.
Rate this result: Provide a positive or negative rating of this result.

We'll make use of the Download option in a bit – but first… let's make use of some of the choices present in the right side panel to modify and guide our recolor.

Modifying the Prompt

You can always change the text prompt however you wish and click the Refresh button to generate a different set of variants. Let's instead keep this same text prompt but see how various choices can impact how it affects the recolor results:

Image 14: A modified prompt box with options added

Focus again on the right side of the user interface and make the following selections:

1. Select a color harmony: Complementary
2. Choose a couple of colors to weight the prompt: Green and Blue violet
3. Disable the Preserve black and white toggle
4. Click the Refresh button to see the results of these options

A new set of four recolor variants is produced. This set of variants is guided by the extra choices we made and is vastly different from the original set, which was recolored solely based upon the text prompt:

Image 15: A new set of recolor variations is generated

Play with the various options on your own to see what kind of variations you can achieve in the artwork.

Downloading your Recolored Artwork

Once you are happy with one of the generated recolored variants, you'll want to download it for use elsewhere. Click the Download button in the upper right of the selected variant to begin the download process for your recolored SVG file. The recolored SVG file is immediately downloaded to your computer. Note that unlike other content generated with Firefly, files created with Generative recolor do not contain a Firefly watermark or badge:

Image 17: The resulting recolored SVG file

That's all there is to it! You can continue creating more recolor variants and freely download any that you find particularly interesting.

Before we conclude… note that another good use for Generative recolor – similar to most applications of AI – is for ideation. If you are stuck with a creative block when trying to decide on a color palette for something you are designing… Firefly can help kick-start that process for you.

Author Bio

Joseph is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design.

Joseph Labrecque is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities.
He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works.

Joseph is an Adobe Education Leader and a member of Adobe Partners by Design. He holds a bachelor's degree in communication from Worcester State University and a master's degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023
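To make the earlier point about SVG being plain, text-based XML more concrete, here is a small, illustrative Python sketch that recolors an SVG locally by rewriting its fill attributes. This is not how Firefly works internally; it simply demonstrates why vector artwork is so easy to recolor. The file name and color values are made-up placeholders, and real artwork may also carry colors inside style attributes, which this sketch does not handle.

import xml.etree.ElementTree as ET

# Hypothetical input file and replacement palette (old fill -> new fill)
source_file = "cat.svg"
palette = {"#f4a261": "#2a9d8f", "#e76f51": "#264653"}

tree = ET.parse(source_file)
root = tree.getroot()

# Walk every element and swap any fill colors found in the palette
for element in root.iter():
    fill = element.get("fill")
    if fill in palette:
        element.set("fill", palette[fill])

tree.write("cat_recolored.svg")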

Co-Pilot & Microsoft Fabric for Power BI

Sagar Lad
23 Aug 2023
8 min read
Introduction

Microsoft's data platform solution for the modern era is called Fabric. Microsoft's three primary data analytics tools - Power BI, Azure Data Factory, and Azure Synapse - are all covered under Fabric. Advanced artificial intelligence capabilities built on machine learning and natural language processing (NLP) are made available to Power BI customers through Copilot. In this article, we will dive deep into how Copilot and Microsoft Fabric will transform the way we develop and work with Power BI.

Co-Pilot and Fabric with Power BI

The urgent requirement for businesses to turn their data into value is something that both Microsoft Fabric and Copilot aspire to address. Big Data continues to fall short of its initial promises even after years have passed. Every year, businesses generate more data, yet a recent IBM study found that 90% of this data is never successfully exploited for any kind of strategic purpose. So, more data does not mean more value or business insight. Data fragmentation and poor data quality are the key obstacles to releasing the value of data. These problems are what Microsoft hopes to address with Microsoft Fabric, a human-centric, end-to-end analytics product that brings together all of an organization's data and analytics in one place.

Copilot has now been integrated into Power BI. Large multi-modal artificial intelligence models based on natural language processing have gained attention since the publication of ChatGPT. Beyond that, Microsoft Fabric and Copilot share a trait in that they each aim to transform the Power BI user interface.

Microsoft Fabric and Power BI

Microsoft Fabric is, in essence, Synapse and Power BI together. By combining the benefits of the Power BI SaaS platform with the various Synapse workload types, Microsoft Fabric creates an environment that is more cohesive, integrated, and easier to use for all of the associated profiles. Power BI Premium users will get access to new opportunities for data science, data engineering, and more, while Power BI itself will continue to function as it does right now. Data analysts and Power BI developers are not required to begin using Synapse Data Warehouse if they do not want to. Microsoft wants to combine all of its data offerings into one, called Fabric, just like it did with Office 365.

Image 1: Microsoft Fabric (Source: Microsoft)

Let's understand in detail how Microsoft Fabric will make life easier for Power BI developers.

1. Data Ingestion

There are various methods by which we can connect to data sources in Fabric in order to consume data - for example, utilising Spark notebooks or pipelines. This may be unfamiliar to the Power BI realm, though.

Image 2: Data Transformation in Power BI

Instead, we can ingest the data using Dataflows Gen2, which will save it on OneLake in the proper format.

2. Ad Hoc Query

Once one or more dataflows are successfully published and refreshed, they will show in the workspace along with a number of other artifacts. The SQL Endpoint artifact is one of them. After opening it, we can begin creating on-demand SQL queries and saving them as views. As an alternative, we can also create visual queries, which will enable us to familiarise ourselves with the dataflow diagram view. Above all, however, this interface shares many characteristics with Power BI datamarts, making it a familiar environment for those familiar with Power BI.

Image 3: Power BI - One Data Lake Hub
3. Data Modelling

With the introduction of web modelling for Power BI, we can introduce new metrics and start establishing relationships between different tables right away in this interface. The data model will automatically be contained in the default workspace where the default dataset is kept. Datasets created in this manner via the cloud interface benefit from the new storage option, Direct Lake. By keeping just one copy of data in OneLake, this storage mode prevents data duplication and unnecessary data refreshes.

Co-Pilot and Power BI

Copilot, a new artificial intelligence framework for Power BI, is an offering from Microsoft. Copilot is Power BI's expansive multimodal artificial intelligence model built on natural language processing - it might be compared to the ChatGPT of Power BI. The addition of Copilot to Power BI lets users ask questions about data, generate visuals, and create DAX measures by providing a brief description of what they need. For instance, a user can provide a short statement of their preferences for the report:

"Add a table of the top 500 MNC IT Companies by total sales to my model."

The DAX code required to generate measures and tables is produced automatically by the model.

Copilot enables you to:

Create and customize Power BI reports to provide insights.
Create and improve DAX calculations.
Ask questions about your data.
Publish narrative summaries.

It also brings ease of use and faster time to market.

Key features of Power BI Copilot are as follows:

Automated report generation: Copilot can create well-designed dashboards, data narratives, and interactive components automatically, saving time and effort compared to manually creating reports.

Conversational language interface: we can use everyday language to express data requests and inquiries, making it simpler to connect with your data and gain insights.

Real-time analytics: Copilot's real-time analytics capabilities can be used by Power BI customers to view data and react swiftly to shifts and trends.

Let's look at the step-by-step process of how to use Copilot with Power BI:

Step 1: Open Power BI and go to the Copilot tab screen.
Step 2: Type a query pertaining to the data - for example, to produce a financial report - or pick from a list of suggestions that Copilot has automatically prepared for you.
Step 3: Copilot sorts through and analyses data to provide the information.
Step 4: Copilot compiles a visually appealing report, successfully converting complex data into easily comprehended, practical information.
Step 5: Investigate the data further by posing queries, writing summaries to present to stakeholders, and more.

There are also a few limitations to using the Copilot features with Power BI:

Reliability of the recommendations: Copilot has been trained on all programming languages available in public sources, which underpins the quality of its proposals. The quantity of training data available for a given language, however, may have an impact on the quality of the suggestions.
Suggestions for APL, Erlang, and other specialized programming languages won't be as useful as those for more widely used ones like Python, Java, etc.

Privacy and security issues: there are concerns that the model, which was trained on publicly accessible code, can unintentionally recommend code fragments that have security flaws or were intended to be private.

Dependence on comments and naming: the user is responsible for accuracy, because the AI provides more accurate suggestions when given specific comments and descriptive variable names.

Lack of original solutions: unlike a human developer, the tool is unable to creatively solve problems. It can only make code suggestions based on the training data.

Inefficiency on large codebases: the tool is not designed for going through and comprehending big codebases. It works best when recommending code for straightforward tasks.

Conclusion

The combination of Microsoft Copilot and Fabric with Power BI has the potential to completely alter the data modelling field. It blends sophisticated generative AI with data to speed up the discovery and sharing of insights by everyone. By enabling both data engineers and non-technical people to examine data using AI models, it is transforming Power BI into a human-centered analytics platform.

Author Bio

Sagar Lad is a Cloud Data Solution Architect with a leading organization and has deep expertise in designing and building enterprise-grade intelligent Azure data and analytics solutions. He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP.

Medium, Amazon, LinkedIn

Adobe Firefly Feature Deep Dive

Joseph Labrecque
23 Aug 2023
9 min read
Adobe Firefly

Adobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly, have a look at their FAQ.

Image 1: Adobe Firefly

For more information about Firefly, have a look at the previous articles in this series:

Animating Adobe Firefly Content with Adobe Animate
Exploring Text to Image with Adobe Firefly
Generating Text Effects with Adobe Firefly

In this article, we'll be exploring some of the more detailed features of Firefly in general. While we will be doing so from the perspective of the text-to-image module, much of what we cover will be applicable to other modules and procedures as well.

Before moving on to the visual controls and options… let's consider accessibility. Here is what Adobe has to say about accessibility within Firefly:

"Firefly is committed to providing accessible and inclusive features to all individuals, including users working with assistive devices such as speech recognition software and screen readers. Firefly is continuously enhanced to strive to meet the needs of all types of users, including individuals with visual, hearing, cognitive, motor, or other impairments, and is designed to conform to worldwide accessibility standards." -- Adobe

You can use the following keyboard shortcuts across the Firefly interface to navigate and control the software in a non-visual way:

Tab: navigates between user interface controls.
Space/Enter: activates buttons.
Enter: activates links.
Arrow Keys: navigates between options.
Space: selects options.

As with most accessibility concerns and practices, these additional controls within Firefly can benefit those users who are not otherwise impaired as well – similar to sight-enabled users making use of captions when watching video-based content.

For our exploration of the various additional controls and options within Firefly, we'll start off with a generated set of images based on a prompt. To review how to achieve this, have a look at the article "Exploring Text to Image with Adobe Firefly". Choose one of the generated images to work with and hover your mouse across the image to reveal a set of controls.

Image 2: Image Overlay Options

We will explore each of these options one by one as we continue along with this article.

Rating and Feedback Options

Adobe is very open to feedback with Firefly. One reason is to get general user feedback to improve the experience of using the product… and the other is to influence the generative models so that users receive the output that is expected.

Giving a simple thumbs-up or thumbs-down is the most basic level of feedback and is meant to rate the results of your prompt.

Image 3: Rating the generated results

Once you provide a thumbs-up or thumbs-down… the overlay changes to request additional feedback. You don't necessarily need to provide more feedback – but clicking on the Feedback button will allow you to go more in-depth in terms of why you provided the initial rating.

Image 4: Additional feedback prompt

Clicking the Feedback button will summon a much larger overlay where you can make choices via a checkbox as to why you rated the results the way you did. You also have the option to put a little note in here as well.

Image 5: Additional feedback form

Clicking the Submit Feedback button or the Cancel button will close the overlay and bring you back to the experience.

Additionally, there is an option to Report the image to Adobe.
This is always a negative action – meaning that you find the results offensive or inappropriate in some way.

Image 6: Report prompt

Clicking on the Report option will summon a similar form to that of the additional feedback, but the options will, of course, be different.

Image 7: Report feedback form

Here, you can report via a checkbox and add an optional note as part of the report. Adobe has committed to making sure that violence and things like copyrighted or trademarked characters are not generated by Firefly. For instance, if you use a prompt such as "Mickey Mouse murdering a construction worker with a chainsaw", you will receive a message like the following:

Image 8: Firefly will not render trademarked characters or violence

With Adobe being massively careful in filtering certain words right now, I do hope that in the future users will be able to selectively choose exclusions in place of the general list of censored terms that exists now. While the prompt above is meant to be absurd, there are legitimate artistic reasons for many of the word categories which are currently banned.

General Image Controls

The controls in this section include some of the most used in Firefly at the moment – including the ability to download your generated image.

Image 9: Image options

We have the following controls exposed; from left to right they are named:

Options
Download
Favorite

Options

Starting at the left-hand side of this group of controls, we begin with an ellipsis that represents Options which, when clicked, will summon a small overlay with additional choices.

Image 10: Expanded options

The menu that appears includes the following items:

1. Submit to Firefly gallery
2. Use as a reference image
3. Copy to clipboard

Let's examine each of these in detail.

You may have noticed that the main navigation of the Firefly website includes a number of options: Home, Gallery, Favorites, About, and FAQ. The Gallery section contains generated images that users have submitted to be featured on this page. Clicking the Submit to Firefly gallery option will summon a submission overlay through which you can request that your image be included in the Gallery.

Image 11: Firefly Gallery submission

Simply read over the details and click Continue or Cancel to return.

The second item, Use as reference image, brings up a small overlay that includes the selected image to use as a reference along with a strength slider.

Image 12: Reference image slider

Moving the slider to the left will favor the reference image and moving it to the right will favor the raw prompt instead. You must click the Generate button after adjusting the slider to see its effect.

The final option is Copy to clipboard – which does exactly as you'd expect. Note that Content Credentials are applied in this case just the same as they are when downloading an image. You can read more about this feature in the Firefly FAQ.

Download

Back up at the set of three controls, the middle option allows you to initiate a Download of the selected image. As Firefly begins preparing the image for download, a small overlay dialog appears.

Image 13: Download applies content credentials – similar to the Copy to clipboard option

Firefly applies metadata to any generated image in the form of content credentials and the image download process begins. We've covered exactly what this means in previous articles.
The image is then downloaded to your local file system.

Favorite

Clicking the Favorite control will add the generated image to your Firefly Favorites so that you can return to the generated set of images for further manipulation or to download later on.

Image 14: Adding a favorite

The Favorite control works as a toggle. Once you declare a favorite, the heart icon will appear filled and the control will allow you to un-favorite the selected image instead.

That covers the main set of controls which overlay the right of your image – but there is a smaller set of controls on the left that we must explore as well.

Additional Manipulation Options

The alternative set of controls numbers only two – but they are both very powerful. To the left is the Show similar control and to the right is Generative fill.

Image 15: Show similar and Generative fill controls

Clicking upon the Show similar control will retain the particular, chosen image while regenerating the other three to be more in conformity with the image specified.

Image 16: Show similar will refresh the other three images

As you can see when comparing the sets of images in the figures above and below… you can have great influence over your set of generated images through this control.

Image 17: The original image stays the same

The final control we will examine in this article is Generative fill. It is located right next to the Show similar control. The generative fill view presents us with a separate view and a number of all-new tools for making selections in order to add or remove content from our images.

Image 18: Generative fill brings you to a different view altogether

Generative fill is actually its own proper procedure in Adobe Firefly… and we'll explore how to use this feature in full - in the next article!

Author Bio

Joseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design.

Joseph is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC; a digital media production studio and distribution vehicle for a variety of creative works.

Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor's degree in communication from Worcester State University and a master's degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023

Getting Started with AI Builder

Adeel Khan
23 Oct 2023
9 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

AI is transforming the way businesses operate, enabling them to improve efficiency, reduce costs, and enhance customer satisfaction. However, building and deploying AI solutions can be challenging, even at times for pro developers, due to the inherent complexity of traditional tools. That’s where Microsoft AI Builder comes in. AI Builder is a low-code AI platform that empowers developers to infuse AI into business workflows without writing a single line of code. AI Builder is integrated with Microsoft Power Platform, a suite of tools that allows users to build apps, automate processes, and analyze data. With AI Builder, users can leverage pre-built or custom AI models to enhance their Power Apps and Power Automate solutions.

One of the most powerful features of AI Builder is the prediction model, which allows users to create AI models that can predict outcomes based on historical data. The prediction model can predict the following types of outcomes:

Binary outcomes: a choice between two values. An example would be booking status: canceled/redeemed.
Multiple outcomes: a choice among several fixed values. An example would be stage of delivery: early/on-time/delayed/escalated.
Numeric outcomes: a number value. An example would be revenue per customer.

In this blog post, we will show you how to create and use a prediction model with AI Builder using our business data. We will focus on numeric outcomes and use the example mentioned above: we will attempt to predict the possible lifetime revenue we can generate from customers. Let’s get started!

Getting Data Ready

The process of building a model begins with data. We will not cover the AI Builder prerequisites here, but you can easily find them on Microsoft Learn. The data in focus is sample data of customer profiles from a retailer system. The data includes basic profile details (education, marital status, customer since, kids at home, teens at home), interaction data (participation in campaigns), and a transaction summary (purchases both online and offline, product categories).

The data needs to be either imported into Dataverse or already existing there. In this case, we will import the file “Customer_profile_sample.xls”. To import the data, the user should perform the following actions.

1. Open http://make.powerapps.com and log in to your Power Platform environment.
2. Select the right environment; we recommend performing these actions in a development environment.
3. From the left menu pane, select Tables.
4. Now select the option to upload from Excel. This will start a data import process.

Figure 1: Upload data in Dataverse from an Excel file

5. Upload the Excel file mentioned above, “Customer_profile_sample.xls”. The system will read the file content and give a summary of the data in the file. Note that if your environment has the Copilot feature on, you will see GPT in action: it will not only read the details of the file but also choose the table name and add descriptions to the columns as well.

Figure 2: Copilot in action with file summary

6. Verify the details, making sure the table is named “Customer Profile” and the primary column is “ID”. Once verified, click Create and let the system upload the data into this new table. The system will move you to the table view screen.

Figure 3: Table view screen
7. In this screen, let's click on Columns under the Schema section. This will take us to the column list. Here we need to scroll down and find a column called “Revenue”. Right-click the column and select Edit.

Figure 4: Updating column information

8. Check the Searchable feature and save the changes.

9. Move back all the way to the table list by clicking on Tables in the left navigation. Here we will select our “Customer Profile” table and choose Publish from the top menu. This will apply the change made in step 8. Wait until you see a green bar with the message “Publish completed.”

This concludes the first part: getting the sample data imported.

Creating a Model

Now that we have our data ready and available in Dataverse, let's start building our model. We will follow the next set of actions to deliver the model with this low-code/no-code tool.

1. The first step is to open AI Builder. To open AI Builder Studio, let's go to http://make.powerapps.com.
2. From the left navigation, click on AI Models. This will open the AI model studio.
3. From the top navigation bar, there are many out-of-the-box (OOB) models for various business use cases that developers can choose from, but this time we will select the Prediction model.

Figure 5: Prediction model icon

4. The next pop-up screen will provide details about the prediction model feature and how it can be used. Select the option to begin the model creation process. The model creation process is a multi-step journey that we will explain one step at a time.

5. The first action is to select the historical outcome. Here we need to select the table we created in the section above, “Customer Profile”, and the column (label) we want the model to predict, in this case “Revenue”.

Figure 6: Step one – historical outcome selection

6. The next step is the critical step in any prediction model: feature selection. In this step, we select the columns to make sure we provide enough information to our AI model so it can assess the impact and influence of these features and train itself. The table now has 32 columns (27 we imported from the sample file and 5 added as part of the Dataverse process). We will select 27 columns as the most important features for this model. The ones we will not select are:

Created On: a date column created by Dataverse to track the record creation date; not relevant for predicting revenue.
ID: a sequential number, so again we can decide with confidence that it is not going to be relevant in predicting our label, “Revenue”.
Record Created On: a Dataverse-added column.
Revenue (base): a base currency value.
UTC Conversion Time Zone: a Dataverse-added column.

Before moving to the next step, make sure that you can see 27 columns selected.

Figure 7: Selecting features/columns

7. The next step is to choose the training data with business logic. As you may have noticed, our original imported data contains some rows where the revenue field is empty. Such data would not be helpful for training the model. Hence, we would like the model to train on rows that have revenue information available. We can do so by selecting “Filter the data” and then adding the condition row as shown in the figure below.

Figure 8: Selecting the right dataset

8. Finally, we arrive at the last verification step. Here we will perform one last action before training the model, which is to give the model a proper name. Let's click on the icon to change the name of the model.
We shall name the model “Prediction – Revenue.”

Figure 9: Renaming the model

9. Let’s click the button to begin model training.

Evaluation of the model

The ultimate step of any model creation is the assessment of the model. Once our model is ready and trained, the system will generate model performance details. These details can be accessed by clicking on the model from AI Studio. Let's evaluate and read into our model.

Figure 10: Model performance summary

Performance

AI Builder grades models based on R-squared (goodness of fit). An R-squared value of 88% for a model means that 88% of the variation in revenue can be explained by the model’s inputs. The remaining 12% could be due to other factors not included in the model. For the set of information provided, it is a good start and, in some cases, an acceptable outcome as well.
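For intuition only, here is a minimal sketch (in Python, not anything AI Builder exposes) of how R-squared is computed from actual and predicted values; the revenue figures are invented purely for illustration.

# Minimal sketch of the R-squared (goodness of fit) calculation.
# The revenue values below are made-up illustrative numbers, not our sample data.
import numpy as np

actual = np.array([520.0, 310.0, 780.0, 150.0, 430.0])     # observed revenue per customer
predicted = np.array([500.0, 340.0, 750.0, 180.0, 410.0])  # model predictions for the same customers

ss_residual = np.sum((actual - predicted) ** 2)   # error left over after the model
ss_total = np.sum((actual - actual.mean()) ** 2)  # total variation in revenue
r_squared = 1 - ss_residual / ss_total

print(f"R-squared: {r_squared:.2f}")  # roughly 0.98 for these numbers; AI Builder reported 88% for our model

The closer the value is to 1 (100%), the more of the variation in the label is explained by the model's inputs.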
Most influential data

The model also explains which features are most influential on our outcome, “Revenue.” In this case, Monthly Wine purchases (MntWines) is the highest weighted and suggests the strongest association with the revenue an organization can make from a customer. These weights can trigger a lot of business ideation and further improve business KPIs.

Warnings

In the details section, you can also view the warnings the system has generated. In this case, it has identified a few columns, which we intentionally selected in our earlier steps, as having no association with revenue. This information can be used to further fine-tune the model and remove unnecessary features from the training and feature selection explained earlier.

Figure 11: Warning tab in details

Conclusion

This marks the completion of our model preparation. Once we are satisfied with the model performance, we can choose to publish the model. The model can then be used either through Power Apps or Power Automate to predict revenue and reflect it in Dataverse. This feature of AI Builder opens the door to so many possibilities, and the ability to deliver it in a short duration of time makes it extremely useful. Keep experimenting and keep learning.

Author Bio

Mohammad Adeel Khan is a Senior Technical Specialist at Microsoft. A seasoned professional with over 19 years of experience with various technologies and digital transformation projects, he engages with enterprise customers across geographies and helps them accelerate digital transformation using Microsoft Business Applications, Data, and AI solutions. In his spare time, he collaborates with like-minded people and helps solve business problems for nonprofit organizations using technology.

Adeel is also known for his unique approach to learning and development. During the COVID-19 lockdown, he introduced his 10-year-old twins to Microsoft Learn. The twins not only developed their first Microsoft Power Platform app—an expense tracker—but also became one of the youngest twins to earn the Microsoft Power Platform certification.

Getting Started with AWS CodeWhisperer

Rohan Chikorde
23 Aug 2023
11 min read
Introduction

Efficiently writing secure, high-quality code within tight deadlines remains a constant challenge in today's fast-paced software development landscape. Developers often face repetitive tasks, code snippet searches, and the need to adhere to best practices across various programming languages and frameworks. However, AWS CodeWhisperer, an innovative AI-powered coding companion, aims to transform the way developers work. In this blog, we will explore the extensive features, benefits, and setup process of AWS CodeWhisperer, providing detailed insights and examples for technical professionals.

At its core, CodeWhisperer leverages machine learning and natural language processing to deliver real-time code suggestions and streamline the development workflow. Seamlessly integrated with popular IDEs such as Visual Studio Code, IntelliJ IDEA, and AWS Cloud9, CodeWhisperer enables developers to remain focused and productive within their preferred coding environment. By eliminating the need to switch between tools and external resources, CodeWhisperer accelerates coding tasks and enhances overall productivity.

A standout feature of CodeWhisperer is its ability to generate code from natural language comments. Developers can write plain English comments describing a specific task, and CodeWhisperer automatically analyses the comment, identifies relevant cloud services and libraries, and generates code snippets directly within the IDE. This not only saves time but also allows developers to concentrate on solving business problems rather than getting entangled in mundane coding tasks.

In addition to code generation, CodeWhisperer offers advanced features such as real-time code completion, intelligent refactoring suggestions, and error detection. By analyzing code patterns, industry best practices, and a vast code repository, CodeWhisperer provides contextually relevant and intelligent suggestions. Its versatility extends to multiple programming languages, including Python, Java, JavaScript, TypeScript, C#, Go, Rust, PHP, Ruby, Kotlin, C, C++, shell scripting, SQL, and Scala, making it a valuable tool for developers across various language stacks.

AWS CodeWhisperer addresses the need for developer productivity tools by streamlining the coding process and enhancing efficiency. With its AI-driven capabilities, CodeWhisperer empowers developers to write clean, efficient, and high-quality code. By supporting a wide range of programming languages and integrating with popular IDEs, CodeWhisperer caters to diverse development scenarios and enables developers to unlock their full potential. Embrace the power of AWS CodeWhisperer and experience a new level of productivity and coding efficiency in your development journey.

Key Features and Benefits of CodeWhisperer

A. Real-time code suggestions and completion

CodeWhisperer provides developers with real-time code suggestions and completion, significantly enhancing their coding experience. As developers write code, CodeWhisperer's AI-powered engine analyzes the context and provides intelligent suggestions for function names, variable declarations, method invocations, and more. This feature helps developers write code faster, with fewer errors, and improves overall code quality. By eliminating the need to constantly refer to documentation or search for code examples, CodeWhisperer streamlines the coding process and boosts productivity.
B. Intelligent code generation from natural language comments

One of the standout features of CodeWhisperer is its ability to generate code snippets from natural language comments. Developers can simply write plain English comments describing a specific task, and CodeWhisperer automatically understands the intent and generates the corresponding code. This powerful capability saves developers time and effort, as they can focus on articulating their requirements in natural language rather than diving into the details of code implementation. With CodeWhisperer, developers can easily translate their high-level concepts into working code, making the development process more intuitive and efficient.

C. Streamlining routine or time-consuming tasks

CodeWhisperer excels at automating routine or time-consuming tasks that developers often encounter during the development process. From file manipulation and data processing to API integrations and unit test creation, CodeWhisperer provides ready-to-use code snippets that accelerate these tasks. By leveraging CodeWhisperer's automated code generation capabilities, developers can focus on higher-level problem-solving and innovation, rather than getting caught up in repetitive coding tasks. This streamlining of routine tasks allows developers to work more efficiently and deliver results faster.

D. Leveraging AWS APIs and best practices

As an AWS service, CodeWhisperer is specifically designed to assist developers in leveraging the power of AWS services and best practices. It provides code recommendations tailored to AWS application programming interfaces (APIs), allowing developers to efficiently interact with services such as Amazon EC2, Lambda, and Amazon S3 (a short sketch of this kind of suggestion follows this feature list). CodeWhisperer ensures that developers follow AWS best practices by providing code snippets that adhere to security measures, performance optimizations, and scalability considerations. By integrating AWS expertise directly into the coding process, CodeWhisperer empowers developers to build robust and reliable applications on the AWS platform.

E. Enhanced security scanning and vulnerability detection

Security is a top priority in software development, and CodeWhisperer offers enhanced security scanning and vulnerability detection capabilities. It automatically scans both generated and developer-written code to identify potential security vulnerabilities. By leveraging industry-standard security guidelines and knowledge, CodeWhisperer helps developers identify and remediate security issues early in the development process. This proactive approach ensures that code is written with security in mind, reducing the risk of vulnerabilities and strengthening the overall security posture of applications.

F. Responsible AI practices to address bias and open-source usage

AWS CodeWhisperer is committed to responsible AI practices and addresses potential bias and open-source usage concerns. The AI models behind CodeWhisperer are trained on vast amounts of publicly available code, ensuring accuracy and relevance in code recommendations. However, CodeWhisperer goes beyond accuracy and actively filters out biased or unfair code recommendations, promoting inclusive coding practices. Additionally, it provides reference tracking to identify code recommendations that resemble specific open source training data, allowing developers to make informed decisions and attribute sources appropriately. By focusing on responsible AI practices, CodeWhisperer ensures that developers can trust the code suggestions and recommendations it provides.
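To make the AWS API point in item D concrete, here is a minimal, hypothetical sketch of the kind of boilerplate CodeWhisperer can produce from a single plain-English comment. The function name, bucket, and file names are placeholders of our own, and your actual suggestions will differ.

# upload a local file to an Amazon S3 bucket
# (a comment like the one above is the prompt; the code below is the sort of suggestion you might accept)
import boto3
from botocore.exceptions import ClientError

def upload_file_to_s3(file_path: str, bucket_name: str, object_key: str) -> bool:
    """Upload a file to S3 and report whether the upload succeeded."""
    s3_client = boto3.client("s3")
    try:
        s3_client.upload_file(file_path, bucket_name, object_key)
        return True
    except ClientError as error:
        print(f"Upload failed: {error}")
        return False

if __name__ == "__main__":
    # placeholder names: replace with a bucket and key you own
    upload_file_to_s3("report.csv", "example-bucket", "reports/report.csv")

Accepting a suggestion like this still leaves you responsible for reviewing it: CodeWhisperer's security scanning (item E) helps, but credentials, bucket policies, and error handling remain yours to verify.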
Setting up CodeWhisperer for individual developers

If you are an individual developer who has acquired CodeWhisperer independently and will be using AWS Builder ID for login, follow these steps to access CodeWhisperer from your JetBrains IDE:

1. Ensure that the AWS Toolkit for JetBrains is installed. If it is not already installed, you can install it from the JetBrains plugin marketplace.
2. In your JetBrains IDE, navigate to the edge of the window and click on the AWS Toolkit icon. This will open the AWS Toolkit for JetBrains panel.
3. Within the AWS Toolkit for JetBrains panel, click on the Developer Tools tab. This will open the Developer Tools Explorer.
4. In the Developer Tools Explorer, locate the CodeWhisperer section and expand it. Then, select the "Start" option.
5. A pop-up window titled "CodeWhisperer: Add a Connection to AWS" will appear. In this window, choose the "Use a personal email to sign up" option to sign in with your AWS Builder ID.
6. Once you have entered the personal email associated with your AWS Builder ID, click on the "Connect" button to establish the connection and access CodeWhisperer within your JetBrains IDE.
7. A pop-up titled "Sign in with AWS Builder ID" will appear. Select the "Open and Copy Code" option.
8. A new browser tab will open, displaying the "Authorize request" window. The copied code should already be in your clipboard. Paste the code into the appropriate field and click "Next."
9. Another browser tab will open, directing you to the "Create AWS Builder ID" page. Enter your email address and click "Next." A field for your name will appear. Enter your name and click "Next." AWS will send a confirmation code to the email address you provided.
10. On the email verification screen, enter the code and click "Verify." On the "Choose your password" screen, enter a password, confirm it, and click "Create AWS Builder ID." A new browser tab will open, asking for your permission to allow JetBrains to access your data. Click "Allow."
11. Another browser tab will open, asking if you want to grant the AWS Toolkit for JetBrains access to your data. If you agree, click "Allow."
12. Return to your JetBrains IDE to continue the setup process.

CodeWhisperer in Action

Example Use Case: Automating Unit Test Generation with CodeWhisperer in Python (Credits: aws-solutions-library-samples)

One of the powerful use cases of CodeWhisperer is its ability to automate the generation of unit test code. By leveraging natural language comments, CodeWhisperer can recommend unit test code that aligns with your implementation code. This feature significantly simplifies the process of writing repetitive unit test code and improves overall code coverage.

To demonstrate this capability, let's walk through an example using Python in Visual Studio Code:

1. Begin by opening an empty directory in your Visual Studio Code IDE.
2. (Optional) In the terminal, create and activate a new Python virtual environment:

python3 -m venv .venv
source .venv/bin/activate

3. Set up your Python environment and ensure that the necessary dependencies are installed:

pip install pytest pytest-cov

4. Create a new file in your preferred Python editor or IDE and name it "calculator.py".
5. Add the following comment at the beginning of the file to indicate your intention to create a simple calculator class:

# example Python class for a simple calculator

6. Once you've added the comment, press the "Enter" key to proceed. CodeWhisperer will analyze your comment and start generating code suggestions based on the desired functionality.
7. To accept the suggested code, simply press the "Tab" key in your editor or IDE.

Picture Credit: aws-solutions-library-samples

In case CodeWhisperer does not provide automatic suggestions, you can manually trigger CodeWhisperer to generate recommendations using the following keyboard shortcuts:

For Windows/Linux users, press "Alt + C".
For macOS users, press "Option + C".

If you want to view additional suggestions, you can navigate through them by pressing the Right arrow key. To access previous suggestions, press the Left arrow key. If you wish to reject a recommendation, you can either press the ESC key or use the backspace/delete key.

To continue building the calculator class, proceed by pressing the Enter key and accepting CodeWhisperer's suggestions, whether they are provided automatically or triggered manually. CodeWhisperer will propose basic functions for the calculator class, including add(), subtract(), multiply(), and divide(). In addition to these fundamental operations, it can also suggest more advanced functions like square(), cube(), and square_root().

By following these steps, you can leverage CodeWhisperer to enhance your coding workflow and efficiently develop the calculator class, benefiting from a range of pre-generated functions tailored to your specific needs.
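For reference, the sketch below shows the kind of calculator class and pytest-style unit tests CodeWhisperer tends to suggest for this exercise. It is a hand-written illustration, not actual CodeWhisperer output; your generated code will almost certainly differ, and the test values are our own.

# calculator.py: illustrative only; class and tests kept in one file so it runs as-is with pytest
import pytest


class Calculator:
    """Simple calculator, similar in shape to what CodeWhisperer suggests."""

    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b

    def square(self, a):
        return a ** 2


# pytest collects these test functions automatically; run them with: pytest calculator.py
def test_add():
    assert Calculator().add(2, 3) == 5


def test_subtract():
    assert Calculator().subtract(10, 4) == 6


def test_divide_by_zero():
    with pytest.raises(ValueError):
        Calculator().divide(1, 0)

Running pytest calculator.py executes the tests; adding the --cov flag (from the pytest-cov package installed earlier) also reports how much of the code the tests exercise.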
Conclusion

AWS CodeWhisperer is a groundbreaking tool that has the potential to revolutionize the way developers work. By harnessing the power of AI, CodeWhisperer provides real-time code suggestions and automates repetitive tasks, enabling developers to focus on solving core business problems. With seamless integration into popular IDEs and support for multiple programming languages, CodeWhisperer offers a comprehensive solution for developers across different domains. By leveraging CodeWhisperer's advanced features, developers can enhance their productivity, reduce errors, and ensure the delivery of high-quality code. As CodeWhisperer continues to evolve, it holds the promise of driving accelerated software development and fostering innovation in the developer community.

Author Bio

Rohan Chikorde is an accomplished AI Architect professional with a post-graduate degree in Machine Learning and Artificial Intelligence. With almost a decade of experience, he has successfully developed deep learning and machine learning models for various business applications. Rohan's expertise spans multiple domains, and he excels in programming languages such as R and Python, as well as analytics techniques like regression analysis and data mining. In addition to his technical prowess, he is an effective communicator, mentor, and team leader. Rohan's passion lies in machine learning, deep learning, and computer vision.

LinkedIn


Generative Fill with Adobe Firefly (Part I)

Joseph Labrecque
24 Aug 2023
8 min read
Adobe Firefly AI Overview

Adobe Firefly is a new set of generative AI tools that can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly… have a look at their FAQ.

Image 1: Adobe Firefly

For more information about the usage of Firefly to generate images, text effects, and more… have a look at the previous articles in this series:

Animating Adobe Firefly Content with Adobe Animate
Exploring Text to Image with Adobe Firefly
Generating Text Effects with Adobe Firefly

Adobe Firefly Feature Deep Dive

In the next two articles, we’ll continue our exploration of Firefly with the Generative fill module. We’ll begin with an overview of accessing Generative fill from a generated image and then explore how to use the module on our own personal images.

Recall from a previous article, Exploring Text to Image with Adobe Firefly, that when you hover your mouse cursor over a generated image – overlay controls will appear.

Image 2: Generative fill overlay control from Text to image

One of the controls in the upper right of the image frame will invoke the Generative fill module and pass the generated image into that view.

Image 3: The generated image is sent to the Generative fill module

Within the Generative fill module, you can use any of the tools and workflows that are available when invoking Generative fill from the Firefly website. The only difference is that you are passing in a generated image rather than uploading an image from your local hard drive. Keep this in mind as we continue to explore the basics of Generative fill in Firefly – as we’ll begin the process from scratch.

Generative Fill

When you first enter the Firefly web experience, you will be presented with the various workflows available. These appear as UI cards and present a sample image, the name of the procedure, a procedure description, and either a button to begin the process or a label stating that it is “in exploration”. Those which are in exploration are not yet available to general users. We want to locate the Generative fill module and click the Generate button to enter the experience.

Image 4: The Generative fill module card

From there, you’ll be taken to a view that prompts you to upload an image into the module. Firefly also presents a set of sample images you can load into the experience.

Image 5: Generative fill getting started prompt

Clicking the Upload image button summons a file browser for you to locate the file you want to use Generative fill on. In my example, I’ll be using a photograph of my cat, Poe. You can download the photograph of Poe [[ NOTE – LINK TO FILE Poe.jpg ]] to work with as well.

Image 6: The photograph of Poe, a cat

Once the image file has been uploaded into Firefly, you will be taken to the Generative fill user experience and the photograph will be visible. Note that this is exactly the same experience as when entering Generative fill from a prompt-generated image, as we saw above. The only real difference is how we get to this point.

Image 7: The photograph is loaded into Generative fill

You will note that there are two sets of tools available within the experience. One set is along the left side of the screen and includes the Insert, Remove, and Pan tools.

Image 8: Insert, Remove, and Pan

Switching between the Insert and Remove tools changes the function of the current process. The Pan tool allows you to pan the image around the view. Along the bottom of the screen is the second set of tools – which are focused on selections.
This set contains the Add and Subtract tools, access to Brush Settings, a Background removal process, and a selection Invert toggle.

Image 9: Add, Subtract, Brush Settings, Background removal, and selection Invert

Let’s perform some Generative fill work on the photograph of Poe.

1. In the larger overlay along the bottom of the view, locate and click the Background option. This is an automated process that will detect and remove the background from the image loaded into Firefly.

Image 10: The background is removed from the selected photograph

2. A prompt input appears directly beneath the photograph. Type in the following prompt: “a quiet jungle at night with lots of mist and moonlight”

Image 11: Entering a prompt into the prompt input control

3. If desired, you can view and adjust the settings for the generative AI by clicking the Settings icon in the prompt input control. This summons the Settings overlay.

Image 12: The generative AI Settings overlay

Within the Settings overlay, you will find three items that can be adjusted to influence the AI:

Match shape: You have two choices here – freeform or conform.
Preserve content: A slider that can be set to include more of the original content or produce new content.
Guidance strength: A slider that can be set to give more strength to the original image or to the given prompt.

I suggest leaving these at the default settings for now.

4. Click the Settings icon again to dismiss the overlay.
5. Click the Generate button to generate a background based upon the entered prompt.

A new background is generated from our prompt, and it now appears as though Poe is visiting a lush jungle at night.

Image 13: Poe enjoying the jungle at night

Note that the original photograph included a set of electric outlets exposed within the wall. When we removed the background, Firefly recognized that they were distinct from the general background and so retained them. The AI has taken them into account when generating the new background and has interestingly propped them up with a couple of sticks. It has also rendered a realistic shadow cast by Poe.

1. Before moving on, click the Cancel button to bring the transparent background back. Clicking the Keep button would commit the changes – and we do not want that, as we wish to continue exploring other options.
2. Clear out the prompt you previously wrote within the prompt input control so that there is no longer any prompt present.

Image 14: Click the Generate button with no prompt present

3. Click the Generate button without a text prompt in place.

The photograph receives a different background from the one generated with a text prompt. When clicking the Generate button with no text prompt, you are basically allowing the Firefly AI to make all the decisions based solely on the visual properties of the image.

Image 15: A set of backgrounds is generated based on the remaining pixels present

You can select any of the four variations that were generated from the set of preview thumbnails beneath the photograph. If you’d like Firefly to generate more variations – click the More button. Select the one you like best and click the Keep button.

Okay! That’s pretty good, but we are not done with Generative fill yet. We haven’t even touched the Insert and Remove functions… and there are Brush Settings to manipulate… and much more. In the next article, we’ll explore the remaining Generative fill tools and options to further manipulate the photograph of Poe.
Author Bio

Joseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design.

Joseph is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC; a digital media production studio and distribution vehicle for a variety of creative works.

Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023