
Generate Embeddable HTML code for any URL using Pipfeed’s Extract API

Embeddable cards provide a clean, responsive, and shareable preview for any content on the web. Cards are the easiest way to leverage Pipfeed’s Extract API for any media: they render as a responsive embed, and roughly 40% of users will click, hover, or view cards that include videos, images, and rich media. Cards automatically adapt to fit any site they are placed in.

Many of these embed APIs, however, aren’t very customizable and usually add to the page’s load time. Using Pipfeed’s Extract API, you can generate pure HTML in the framework and style of your choice. For this example we will use Bootstrap cards to style the generated output.

Extracting metadata from the article URL:

We want to get fields such as the main image, title, and description from the article itself. For this, we will use Pipfeed’s News Article Extract API. The free tier allows 100 calls per day and 3,000 calls per month; you can upgrade to one of our plans for a higher API call volume.

You can get an API key from promptAPI here: https://promptapi.com/marketplace/description/pipfeed-api

$curl = curl_init();

curl_setopt_array($curl, array(
  CURLOPT_URL => "https://api.promptapi.com/pipfeed",
  CURLOPT_HTTPHEADER => array(
    "Content-Type: text/plain",
    "apikey: YOUR_API_KEY"
  ),
  CURLOPT_RETURNTRANSFER => true,
  CURLOPT_ENCODING => "",
  CURLOPT_MAXREDIRS => 10,
  CURLOPT_TIMEOUT => 0,
  CURLOPT_FOLLOWLOCATION => true,
  CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
  CURLOPT_CUSTOMREQUEST => "POST",
  // The POST body is the article URL to extract (URL-encoded here)
  CURLOPT_POSTFIELDS => "https%3A%2F%2Fsystem.camp%2Fstartups%2Funderstanding-kpis-for-mobile-apps-and-how-to-measure-kpis%2F"
));

$response = curl_exec($curl);

curl_close($curl);
echo $response;

For this example we are using PHP to generate the HTML, but you can use the language of your choice.

The above will return a response like this:

{
  "authors": null,
  "blogLogoUrl": null,
  "blogName": null,
  "categories": null,
  "category": "machine-learning",
  "description": "How to create a financial model for a mobile app? How to measure KPIs? What are KPIs? Learn all this and more...",
  "html": "<div class=\"page\" id=\"readability-page-1\"><div>\n\t\n\n<p>KPIs are the ultimate indicator for how well you Mobile app is doing. KPI stands for Key Performance Indicator. The first rule of KPIs is that they need to be the \u201ckey indicators\u201d of your business model and should always relate to the financial model directly.</p>\n\n\n\n<p>Why? Because you want to quantify and improve these KPIs to help your company earn more money or get more users.</p>\n\n\n\n<p>The main goal of any app is either to make the sure users are using the app frequently or they are paying frequently. Hence it is hard to make a lot of profits from Utility apps like Calculators. These apps are really good but the overall usage is quite low and hence the Business Model will not make sense to build a company around these use-cases.</p>\n\n\n\n<p>Based on these metrics we can then define how much money the apps can make.</p>\n\n\n\n<h2 id=\"let-s-define-the-kpis-first-\">Let\u2019s define the KPIs first:</h2>\n\n\n\n<h3 id=\"retention-rate\">Retention Rate</h3>\n\n\n\n<p>Retention Rate is the most important metric for a mobile app. This defines the \u201cpercentage\u201d of users that are still using the app after a certain time has passed.</p>\n\n\n\n<p>For example: If your app has 1000 user signed up. After one month you check how many users have opened the app in the second month. So in second month if 300 users have opened the app then you retention rate is defined as :</p>\n\n\n\n<blockquote><p>300/1000 = 30%</p></blockquote>\n\n\n\n<p>So the retention rate is 30%. Industry average is around 15% to 60%. Mobile apps like Facebook, Instagram & watsapp have a retention rate of over 70% hence they have these insane valuations.</p>\n\n\n\n<h3 id=\"conversion-rate-\">Conversion Rate:</h3>\n\n\n\n<p>This metric is useful for mobile apps that offer a Subscription Model. 
Conversion rate in this scenario means how many users are converting to paid subscribers.</p>\n\n\n\n<p>For example: From the 300 Monthly Active Users(MAU) if 60 pay and become your subscribers then your conversion rate can be calculated as:</p>\n\n\n\n<blockquote><p>300/60 = 20%</p></blockquote>\n\n\n\n<p>Hence you have a 20% conversion rate.  Industry average is around 10%. The Harvard Business Review found that even a 5% increase in retention could increase revenues by <a rel=\"noopener\" href=\"https://amplitude.com/blog/2016/01/27/understanding-user-retention\">25% to 95%</a>.</p>\n\n\n\n<h2 id=\"cost-per-acquisition-cpa-\">Cost Per Acquisition(CPA)</h2>\n\n\n\n<p>Cost Per Acquisition is defined as the average cost to acquire a user. This cost must be averaged out over long periods spanning multiple months to get the big picture.</p>\n\n\n\n<p>The other KPI metrics will help you understand how valuable is your app. But CPA defines if your app makes sense in terms of a viable business.</p>\n\n\n\n<p>There are various channels for acquiring users:</p>\n\n\n\n<ul><li>Ads (Google, Facebook, LinkedIn, Twitter, SnapChat, TikTok, Apple etc.)</li><li>Influencer Marketing</li><li>SEO/Blog Content</li><li>Directory Listings (ProductHunt, BetaFy etc.)</li><li>Community Forums</li><li>Direct Advertising (Podcasts, Rent Websites etc.)</li><li>And hundreds more</li></ul>\n\n\n\n<h2 id=\"lets-see-how-all-these-numbers-fit-in-a-financial-model-\">Lets see how all these numbers fit in a Financial Model:</h2>\n\n\n\n<p>The model is quite straightforward once you have the KPIs.</p>\n\n\n\n<p>In the below example we are looking at apps that earn from Subscription or Monthly Recurring Revenue.</p>\n\n\n\n<h2 id=\"now-let-s-look-at-apps-that-earn-from-subscription-only-\">Now let\u2019s look at apps that earn from Subscription only.</h2>\n\n\n\n<p>In this model we are also making an assumption that people are willing to pay for the service and a significant number of users exist 
that are willing to pay for the service you are providing. This is what truly a startup does, finds a service people are willing to pay for and make sure that there is a large number of users willing to pay for this service. If you have this then you have a great business that investors will be happy to invest in.</p>\n\n\n\n<p>We will look at an app like Pocket.</p>\n\n\n\n<h3 id=\"assumptions-\">Assumptions:</h3>\n\n\n\n<p><strong>Retention Rate:</strong> 30%<br><strong>Conversion Rate:</strong> 10%<br><strong>Monthly Subscription Cost:</strong> $4.99<br><strong>CPA:</strong> $1/-</p>\n\n\n\n<p>To get 1000 users we spend<br>Number of Users * CPA = 1000 USD</p>\n\n\n\n<p>Monthly Active Users<br>Total Users * Retention Rate<br>1000 * 30% = 300 MAU</p>\n\n\n\n<p>Subscription earning from MAU<br>MAU * Conversion Rate * Average Subscription Cost<br>300 * 10% * 4.99 = 149.7</p>\n\n\n\n<p>Yearly Subscription earning<br>149.7 * 12 = 1796.4</p>\n\n\n\n<p>Profits<br>$1796.4 \u2013 $1000 = $794.4</p>\n\n\n\n<p>So, if we spend 1000 dollars we earn a profit of 794.4. If you can scale this system to 100,000 users then your yearly profits become: $794,00.4. This is a pretty good & viable business model that investors will be ready to invest in to help you scale.</p>\n\n\n\n<p>Pocket had around 20 million users in 2015. So if we plug the numbers in the above model we get a yearly revenue from 20 million users at $15,888,000 (over 15 million dollars). Pocket app got acquired for around 30 million by Mozilla Foundation.</p>\n\n\n\n<p>SaaS usually have a retention rate of over 80%. Retention like this is required as the Cost Per Acquisition is pretty high. 
That\u2019s why you will see SaaS companies offering $10 to $1000 referral commission.</p>\n\n\n\n<h2 id=\"now-let-s-look-at-apps-that-earn-from-ads-only-\">Now let\u2019s look at apps that earn from Ads only.</h2>\n\n\n\n<p>For advertising based apps the most important number is Total Users & Retention.</p>\n\n\n\n<p>In mass consumer apps finding the total retained users is a bit tricky. These apps only make sense if the Retention Curve is a \u201cSmile\u201d curve like this:</p>\n\n\n\n<figure><span data-svq-align=\"\"><img data-src=\"https://system.camp/wp-content/uploads/2020/10/retention_smile_curve.png\" data-height=\"802\" data-width=\"1133\" alt=\"\" src=\"https://system.camp/wp-content/uploads/2020/10/retention_smile_curve.png\"><span></span></span><figcaption>Evernote Retention Curve. Source: https://www.sequoiacap.com/article/retention</figcaption></figure>\n\n\n\n<p>This below retention graph is an example of long tail retention. Where some percentage of your users choose to stick around for a longer time and hence make the financial model viable.</p>\n\n\n\n<figure><span data-svq-align=\"\"><img data-src=\"https://system.camp/wp-content/uploads/2020/10/retention-smile-flat.png\" data-height=\"600\" data-width=\"1014\" alt=\"\" src=\"https://system.camp/wp-content/uploads/2020/10/retention-smile-flat.png\"><span></span></span><figcaption>Retention Graph. Source: https://www.sequoiacap.com/article/retention</figcaption></figure>\n\n\n\n<p>You can use Retention curve graph to find if you Product Market Fit. I should probably write an article on understanding retention rates.</p>\n\n\n\n<p>In any case we assume the overall retention assuming you has a smile graph and your long tail users are sticking around and using the app. 
For this use case this is what the financial model will look like.</p>\n\n\n\n<p>Let\u2019s look an app like FlipBoard:</p>\n\n\n\n<h3 id=\"assumptions--1\">Assumptions:</h3>\n\n\n\n<p><strong>Retention Rate:</strong> 15%<br><strong>Average Earning from Ads/user/month:</strong> $0.5<br><strong>CPA:</strong> $0.5/-</p>\n\n\n\n<p>To get 1000 users we spend<br>Number of Users * CPA = $500</p>\n\n\n\n<p>Monthly Active Users<br>Total Users * Retention Rate<br>1000 * 15% = 150 MAU</p>\n\n\n\n<p>Ads earning from MAU<br>MAU * Average Earning From Ads Per User<br>150 * $0.5 = $75</p>\n\n\n\n<p>Yearly Ads earning<br>75 * 12 = 900</p>\n\n\n\n<p>Profits<br>$900 \u2013 $500 = $400</p>\n\n\n\n<p><strong>Earning per user overall: </strong>$0.4 per year</p>\n\n\n\n<p>Here we reduced the cost of customer acquisition to make sense of the model to $0.5. Unless you are able to achievea much lower CPA, advertising model will not work. Also ads based models only work for large number of user.</p>\n\n\n\n<p>FlipBoard has around 145 million Monthly Active Users. So putting these numbers into the above financial model we get their yearly revenue to be around:</p>\n\n\n\n<p>MAU * Average Earning from Ads/User * Months In Year</p>\n\n\n\n<p>145 million MAU * $0.5 * 12 = $870 Million</p>\n\n\n\n<p>The above model should be taken with a grain of Salt. This is a good model for \u201cpredicting\u201d the possible outcome and usually at really large scale it depends on the business on how they chose to monetize.</p>\n\n\n\n<p>Usually mobile apps with such large number do direct deals with advertisers and are able to increase their annual earning. Flipboard doesn\u2019t have an advertising platform like Facebook and deals with large advertisers/big companies directly.</p>\n\n\n\n<p>In some cases the goal is not just to increase the annual ads expenditure but maintain a consistent influx of ads revenue. This is a model followed by FlipBoard. 
They usually do month long or year long deals with Big Brands to have a consistent cash flow.</p>\n\n\n\n<p>The other big factor that defines how much you can charge for ads is the type of users you have. For apps like tiktok, most users fall in younger category and hence ads targeted at younger audience. These users have a lower monthly earning and are not that attractive to advertisers unless they can reach a really large number of users.</p>\n\n\n\n<p>LinkedIn can charge more for its ads as the users using the platform are mostly professionals. It is very hard to find professionals to advertise to on the Internet. This is what Microsoft saw when they acquired LinkedIn for $26.2 Billion.</p>\n\n\n\n<h2 id=\"how-to-use-this-model\">How to use this Model</h2>\n\n\n\n<p>So this was a guide on creating a Financial model for mobile apps. To use the above strategy to provide a more realistic model try to make your assumptions based on real world data.</p>\n\n\n\n<p>Before starting you should ask these questions:</p>\n\n\n\n<ul><li>Are user willing to pay for your service?</li><li>How much are they willing to pay?</li><li>How many users are there that you can realistically reach?</li></ul>\n\n\n\n<p>If you have answers to these problems that you can create a much more realistic model. It is very easy to validate your idea even before starting. Find your potential paying customers and ask them if they would want a service like this and they are willing to pay for this.</p>\n\n\n\n<p>Hope you like this guide and I hope it provides a framework for your startup. 
Wish you all the best.</p>\n\n\n\n<p>Let me know in the comments what you think.</p>\n\n\t\n\n<div>\n    <div>\n        <div>\n            <p><a rel=\"author\" href=\"https://system.camp/profile/shashank/\">\n\t\t\t\t\t<img loading=\"lazy\" width=\"80\" height=\"80\" srcset=\"https://secure.gravatar.com/avatar/33563973b6f338002e574f30a3f94788?s=160&d=mm&r=g 2x\" src=\"https://secure.gravatar.com/avatar/33563973b6f338002e574f30a3f94788?s=80&d=mm&r=g\" alt=\"\">                </a>\n            </p>\n            <p><span>\n                    \n                </span>\n                <span>\n                    <span>Member since</span>\n                     <time datetime=\"2020-10-05 07:14\">\n                        October 6, 2020                     </time>\n                </span>\n            </p>\n\n\t\t\t\n\t    \n\n\t\t\t\n        </div>\n\t\t    </div>\n\n\t\n\t</div>\n</div></div>",
  "images": [
    "https://system.camp/wp-content/uploads/2020/10/Calling-WordPress-REST-APIs-to-create-users-articles-posts-etc.-with-examples-using-JAVA.png",
    "https://system.camp/wp-content/uploads/2020/10/Batch-load-objects-using-dynamoDBMapper.png",
    "https://system.camp/wp-content/uploads/2019/09/onboard_image_06.min_.png",
    "https://system.camp/wp-content/uploads/2020/10/Calling-WordPress-REST-APIs-to-create-users-articles-posts-etc.-with-examples-using-JAVA-150x150.png",
    "https://system.camp/wp-content/uploads/2020/10/Teal-Autumn-Leaves-Facebook-Cover-150x150.png",
    "https://system.camp/wp-content/plugins/front-user-profile/assets/img/cat-placeholder.png",
    "https://system.camp/wp-content/uploads/2020/10/Batch-load-objects-using-dynamoDBMapper-150x150.png",
    "https://system.camp/wp-content/uploads/2020/10/Understanding-Financial-Model-using-KPIs-for-mobile-apps-A-definitive-Guide-1-1024x390.jpg",
    "https://system.camp/wp-content/uploads/2020/10/Ocean-Beach-Wedding-Facebook-Cover.png",
    "https://system.camp/wp-content/uploads/2020/10/How-to-parse-Google-Search-result-in-Java-150x150.jpg",
    "https://system.camp/wp-content/themes/typer/assets/img/placeholder.png",
    "https://secure.gravatar.com/avatar/33563973b6f338002e574f30a3f94788?s=60&d=mm&r=g",
    "https://system.camp/wp-content/uploads/2020/10/The-simplest-way-to-sort-HapMap_String-Object_-in-JAVA.png",
    "https://system.camp/wp-content/uploads/2020/10/cropped-logo-1.png",
    "https://system.camp/wp-content/uploads/2020/10/Ocean-Beach-Wedding-Facebook-Cover-150x150.png",
    "https://system.camp/wp-content/uploads/2020/10/retention-smile-flat.png",
    "https://system.camp/wp-content/uploads/2019/09/onboard_image_07.min_.png",
    "https://system.camp/wp-content/uploads/2020/10/How-to-parse-Google-Search-result-in-Java.jpg",
    "https://system.camp/wp-content/uploads/2020/10/retention_smile_curve.png",
    "https://system.camp/wp-content/uploads/2020/10/Teal-Autumn-Leaves-Facebook-Cover.png",
    "https://secure.gravatar.com/avatar/33563973b6f338002e574f30a3f94788?s=80&d=mm&r=g",
    "https://system.camp/wp-content/uploads/2019/09/onboard_image_05.min_.png",
    "https://system.camp/wp-content/uploads/2020/10/The-simplest-way-to-sort-HapMap_String-Object_-in-JAVA-150x150.png",
    "https://system.camp/wp-content/uploads/2020/10/Understanding-Financial-Model-using-KPIs-for-mobile-apps-A-definitive-Guide-1.jpg"
  ],
  "keywords": [
    "ads",
    "app",
    "apps",
    "earning",
    "financial",
    "kpis",
    "mobile",
    "model",
    "pay",
    "rate",
    "retention",
    "understanding",
    "users",
    "willing"
  ],
  "language": "en",
  "mainImage": "https://system.camp/wp-content/uploads/2020/10/Understanding-Financial-Model-using-KPIs-for-mobile-apps-A-definitive-Guide-1-1024x390.jpg",
  "predictedCategories": [
    "machine-learning",
    "money",
    "data-science"
  ],
  "publishedAt": null,
  "summary": "The first rule of KPIs is that they need to be the \u201ckey indicators\u201d of your business model and should always relate to the financial model directly.\nSo in second month if 300 users have opened the app then you retention rate is defined as :300/1000 = 30%So the retention rate is 30%.\nMobile apps like Facebook, Instagram & watsapp have a retention rate of over 70% hence they have these insane valuations.\nConversion Rate:This metric is useful for mobile apps that offer a Subscription Model.\nHow to use this ModelSo this was a guide on creating a Financial model for mobile apps.",
  "tags": [
    "Apps"
  ],
  "text": "KPIs are the ultimate indicator for how well you Mobile app is doing. KPI stands for Key Performance Indicator. The first rule of KPIs is that they need to be the \u201ckey indicators\u201d of your business model and should always relate to the financial model directly. Why? Because you want to quantify and improve these KPIs to help your company earn more money or get more users. The main goal of any app is either to make the sure users are using the app frequently or they are paying frequently. Hence it is hard to make a lot of profits from Utility apps like Calculators. These apps are really good but the overall usage is quite low and hence the Business Model will not make sense to build a company around these use-cases. Based on these metrics we can then define how much money the apps can make. Let\u2019s define the KPIs first: Retention Rate Retention Rate is the most important metric for a mobile app. This defines the \u201cpercentage\u201d of users that are still using the app after a certain time has passed. For example: If your app has 1000 user signed up. After one month you check how many users have opened the app in the second month. So in second month if 300 users have opened the app then you retention rate is defined as : 300/1000 = 30% So the retention rate is 30%. Industry average is around 15% to 60%. Mobile apps like Facebook, Instagram & watsapp have a retention rate of over 70% hence they have these insane valuations. Conversion Rate: This metric is useful for mobile apps that offer a Subscription Model. Conversion rate in this scenario means how many users are converting to paid subscribers. For example: From the 300 Monthly Active Users(MAU) if 60 pay and become your subscribers then your conversion rate can be calculated as: 300/60 = 20% Hence you have a 20% conversion rate. Industry average is around 10%. The Harvard Business Review found that even a 5% increase in retention could increase revenues by 25% to 95%. 
Cost Per Acquisition(CPA) Cost Per Acquisition is defined as the average cost to acquire a user. This cost must be averaged out over long periods spanning multiple months to get the big picture. The other KPI metrics will help you understand how valuable is your app. But CPA defines if your app makes sense in terms of a viable business. There are various channels for acquiring users: Ads (Google, Facebook, LinkedIn, Twitter, SnapChat, TikTok, Apple etc.) Influencer Marketing SEO/Blog Content Directory Listings (ProductHunt, BetaFy etc.) Community Forums Direct Advertising (Podcasts, Rent Websites etc.) And hundreds more Lets see how all these numbers fit in a Financial Model: The model is quite straightforward once you have the KPIs. In the below example we are looking at apps that earn from Subscription or Monthly Recurring Revenue. Now let\u2019s look at apps that earn from Subscription only. In this model we are also making an assumption that people are willing to pay for the service and a significant number of users exist that are willing to pay for the service you are providing. This is what truly a startup does, finds a service people are willing to pay for and make sure that there is a large number of users willing to pay for this service. If you have this then you have a great business that investors will be happy to invest in. We will look at an app like Pocket. Assumptions: Retention Rate: 30% Conversion Rate: 10% Monthly Subscription Cost: $4.99 CPA: $1/- To get 1000 users we spend Number of Users * CPA = 1000 USD Monthly Active Users Total Users * Retention Rate 1000 * 30% = 300 MAU Subscription earning from MAU MAU * Conversion Rate * Average Subscription Cost 300 * 10% * 4.99 = 149.7 Yearly Subscription earning 149.7 * 12 = 1796.4 Profits $1796.4 \u2013 $1000 = $794.4 So, if we spend 1000 dollars we earn a profit of 794.4. If you can scale this system to 100,000 users then your yearly profits become: $794,00.4. 
This is a pretty good & viable business model that investors will be ready to invest in to help you scale. Pocket had around 20 million users in 2015. So if we plug the numbers in the above model we get a yearly revenue from 20 million users at $15,888,000 (over 15 million dollars). Pocket app got acquired for around 30 million by Mozilla Foundation. SaaS usually have a retention rate of over 80%. Retention like this is required as the Cost Per Acquisition is pretty high. That\u2019s why you will see SaaS companies offering $10 to $1000 referral commission. Now let\u2019s look at apps that earn from Ads only. For advertising based apps the most important number is Total Users & Retention. In mass consumer apps finding the total retained users is a bit tricky. These apps only make sense if the Retention Curve is a \u201cSmile\u201d curve like this: Evernote Retention Curve. Source: https://www.sequoiacap.com/article/retention This below retention graph is an example of long tail retention. Where some percentage of your users choose to stick around for a longer time and hence make the financial model viable. Retention Graph. Source: https://www.sequoiacap.com/article/retention You can use Retention curve graph to find if you Product Market Fit. I should probably write an article on understanding retention rates. In any case we assume the overall retention assuming you has a smile graph and your long tail users are sticking around and using the app. For this use case this is what the financial model will look like. 
Let\u2019s look an app like FlipBoard: Assumptions: Retention Rate: 15% Average Earning from Ads/user/month: $0.5 CPA: $0.5/- To get 1000 users we spend Number of Users * CPA = $500 Monthly Active Users Total Users * Retention Rate 1000 * 15% = 150 MAU Ads earning from MAU MAU * Average Earning From Ads Per User 150 * $0.5 = $75 Yearly Ads earning 75 * 12 = 900 Profits $900 \u2013 $500 = $400 Earning per user overall: $0.4 per year Here we reduced the cost of customer acquisition to make sense of the model to $0.5. Unless you are able to achievea much lower CPA, advertising model will not work. Also ads based models only work for large number of user. FlipBoard has around 145 million Monthly Active Users. So putting these numbers into the above financial model we get their yearly revenue to be around: MAU * Average Earning from Ads/User * Months In Year 145 million MAU * $0.5 * 12 = $870 Million The above model should be taken with a grain of Salt. This is a good model for \u201cpredicting\u201d the possible outcome and usually at really large scale it depends on the business on how they chose to monetize. Usually mobile apps with such large number do direct deals with advertisers and are able to increase their annual earning. Flipboard doesn\u2019t have an advertising platform like Facebook and deals with large advertisers/big companies directly. In some cases the goal is not just to increase the annual ads expenditure but maintain a consistent influx of ads revenue. This is a model followed by FlipBoard. They usually do month long or year long deals with Big Brands to have a consistent cash flow. The other big factor that defines how much you can charge for ads is the type of users you have. For apps like tiktok, most users fall in younger category and hence ads targeted at younger audience. These users have a lower monthly earning and are not that attractive to advertisers unless they can reach a really large number of users. 
LinkedIn can charge more for its ads as the users using the platform are mostly professionals. It is very hard to find professionals to advertise to on the Internet. This is what Microsoft saw when they acquired LinkedIn for $26.2 Billion. How to use this Model So this was a guide on creating a Financial model for mobile apps. To use the above strategy to provide a more realistic model try to make your assumptions based on real world data. Before starting you should ask these questions: Are user willing to pay for your service? How much are they willing to pay? How many users are there that you can realistically reach? If you have answers to these problems that you can create a much more realistic model. It is very easy to validate your idea even before starting. Find your potential paying customers and ask them if they would want a service like this and they are willing to pay for this. Hope you like this guide and I hope it provides a framework for your startup. Wish you all the best. Let me know in the comments what you think. Member since October 6, 2020",
  "title": "Understanding Financial Model and KPIs for mobile apps",
  "url": "https://system.camp/startups/understanding-kpis-for-mobile-apps-and-how-to-measure-kpis/"
}
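Notice that several fields in this response can come back null (authors, blogName, publishedAt above). Before building any markup, it is worth decoding the response and substituting safe defaults. A minimal Python sketch of that step (the field names match the response above; the pick_card_fields helper and the placeholder image URL are our own assumptions, not part of the API):

```python
import json

def pick_card_fields(raw_response, fallback_image="https://example.com/placeholder.png"):
    """Decode the extract-API response and select the card fields,
    substituting safe defaults for anything that came back null."""
    article = json.loads(raw_response)
    return {
        "title": article.get("title") or "Untitled",
        # Fall back to the first 160 chars of the summary when description is null
        "description": article.get("description") or (article.get("summary") or "")[:160],
        "mainImage": article.get("mainImage") or fallback_image,
        "url": article.get("url"),
    }

# Sample response with null fields, mimicking the shape shown above
fields = pick_card_fields(
    '{"title": "Understanding KPIs", "description": null, '
    '"summary": "KPIs are the ultimate indicator.", '
    '"mainImage": null, "url": "https://system.camp/example/"}'
)
```

This keeps the card renderer simple: by the time the template is filled, every field is guaranteed to hold a usable string.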

Embed.ly-like generated code

Our goal is to create a card that looks like the one generated by embed.ly. Below is the embed code produced by embedly’s code generator: https://embed.ly/code?url=https%3A%2F%2Fsystem.camp%2Fstartups%2Funderstanding-kpis-for-mobile-apps-and-how-to-measure-kpis%2F. You can generate the code for any URL.

Generated code:

<blockquote class="embedly-card"><h4><a href="https://system.camp/startups/understanding-kpis-for-mobile-apps-and-how-to-measure-kpis/">Understanding Financial Model and KPIs for mobile apps - A definitive Guide - System.Camp</a></h4><p>KPIs are the ultimate indicator for how well you Mobile app is doing. KPI stands for Key Performance Indicator. The first rule of KPIs is that they need to be the "key indicators" of your business model and should always relate to the financial model directly. Why?</p></blockquote>
<script async src="//cdn.embedly.com/widgets/platform.js" charset="UTF-8"></script>

This is what the rendered HTML from embed.ly looks like:

Understanding Financial Model and KPIs for mobile apps – A definitive Guide – System.Camp

KPIs are the ultimate indicator for how well you Mobile app is doing. KPI stands for Key Performance Indicator. The first rule of KPIs is that they need to be the “key indicators” of your business model and should always relate to the financial model directly. Why?


Using Bootstrap Cards Component

Bootstrap is an amazing library that provides various kinds of cards that are responsive and look great. You can see the various types of Bootstrap cards here: https://getbootstrap.com/docs/4.0/components/card/

This is the code we want to generate using PHP:

<div class="card" style="max-width: 600px; padding: 0px; position: relative; min-width: 200px; margin: 5px auto;">
        <img class="h-auto d-inline-block" src="https://system.camp/wp-content/uploads/2020/10/Understanding-Financial-Model-using-KPIs-for-mobile-apps-A-definitive-Guide-1-1024x390.jpg" alt="Understanding Financial Model and KPIs for mobile apps">
        <div class="card-body">
            <h5 class="card-title">Understanding Financial Model and KPIs for mobile apps</h5>
            <p class="card-text">How to create a financial model for a mobile app? How to measure KPIs? What are KPIs? Learn all this and more...</p>
            <a href="https://system.camp/startups/understanding-kpis-for-mobile-apps-and-how-to-measure-kpis/" target="_blank" class="btn btn-primary">Read more...</a>
        </div>
    </div>
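The same card can be generated in any language. As an illustration, here is a hedged Python sketch (the render_card helper is hypothetical, not part of any library) that fills the Bootstrap template above from the four extracted fields, HTML-escaping every value before it lands in the markup:

```python
import html

# The Bootstrap 4 card markup from above, with placeholders for the fields
CARD_TEMPLATE = """<div class="card" style="max-width: 600px; padding: 0px; position: relative; min-width: 200px; margin: 5px auto;">
    <img class="h-auto d-inline-block" src="{image}" alt="{title}">
    <div class="card-body">
        <h5 class="card-title">{title}</h5>
        <p class="card-text">{description}</p>
        <a href="{url}" target="_blank" class="btn btn-primary">Read more...</a>
    </div>
</div>"""

def render_card(article):
    """Fill the Bootstrap card template from an extract-API response dict,
    escaping each value so it is safe inside HTML attributes and text."""
    return CARD_TEMPLATE.format(
        title=html.escape(article["title"]),
        description=html.escape(article["description"]),
        image=html.escape(article["mainImage"]),
        url=html.escape(article["url"]),
    )

card = render_card({
    "title": "Understanding Financial Model and KPIs for mobile apps",
    "description": "How to create a financial model for a mobile app?",
    "mainImage": "https://system.camp/cover.jpg",
    "url": "https://system.camp/startups/understanding-kpis/",
})
```

Escaping matters here because the title and description come from an arbitrary third-party page; echoing them into your markup unescaped would let a malicious article inject HTML into your site.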

Putting it all together

From the returned article extract response we will use these fields:

  • Title
  • Description
  • Main Image
  • Url

For Bootstrap to work you will need to import Bootstrap’s CSS and JS, as well as jQuery, in your page’s head. Most websites already include these, so check first; if not, add these imports:

<script src="https://code.jquery.com/jquery-3.5.1.min.js" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"></script>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script>

Let’s put this all together in our PHP script:

<!doctype html>
<html lang="en">
<head>
    <script src="https://code.jquery.com/jquery-3.5.1.min.js" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"></script>
    <!-- Latest compiled and minified CSS (Bootstrap 4 is required: the card component does not exist in Bootstrap 3) -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous">
    <!-- Latest compiled and minified JavaScript -->
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/js/bootstrap.min.js" integrity="sha384-B4gt1jrGC7Jh4AgTPSdUtOBvfO8shuf57BaghqFfPlYxofvL8/KUEfYiJOMMV+rV" crossorigin="anonymous"></script>
</head>
<body>
<?php
// Call the extract API for the article we want to embed
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_URL => "https://api.promptapi.com/pipfeed",
    CURLOPT_HTTPHEADER => array(
        "Content-Type: text/plain",
        "apikey: YOUR_API_KEY"
    ),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_ENCODING => "",
    CURLOPT_MAXREDIRS => 10,
    CURLOPT_TIMEOUT => 0,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_CUSTOMREQUEST => "POST",
    CURLOPT_POSTFIELDS => "https://system.camp/startups/understanding-kpis-for-mobile-apps-and-how-to-measure-kpis/"
));
$response = curl_exec($curl);
curl_close($curl);
$extractedArticle = json_decode($response);
// var_dump($extractedArticle); // uncomment to inspect the full response
?>
<div class="container">
    <div class="card" style="max-width: 600px; padding: 0px; position: relative; min-width: 200px; margin: 5px auto;">
        <img class="h-auto d-inline-block" src="<?php echo htmlspecialchars($extractedArticle->mainImage) ?>" alt="<?php echo htmlspecialchars($extractedArticle->title) ?>">
        <div class="card-body">
            <h5 class="card-title"><?php echo htmlspecialchars($extractedArticle->title) ?></h5>
            <p class="card-text"><?php echo htmlspecialchars($extractedArticle->description) ?></p>
            <a href="<?php echo htmlspecialchars($extractedArticle->url) ?>" target="_blank" class="btn btn-primary">Read more...</a>
        </div>
    </div>
</div>
</body>
</html>

This is what the rendered HTML looks like:

Understanding Financial Model and KPIs for mobile apps

How to create a financial model for a mobile app? How to measure KPIs? What are KPIs? Learn all this and more…

Read more…

You can customize the generated embed code based on your preferences. For example, you can use the “summary” field, show the first 4 lines of the article content, or display all of the extracted images as a carousel.
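As one illustration of such a customization, here is a minimal Python sketch of the same card generation. The `render_card` helper is a hypothetical name introduced here (not part of the API); it HTML-escapes the extracted fields and truncates the description, which the PHP example above does not do:

```python
import html

def render_card(article, max_desc_chars=160):
    """Build a Bootstrap-style card from the fields returned by the
    extract API (mainImage, title, description, url). Values are
    HTML-escaped, and the description is truncated as an example of
    customizing the embed."""
    desc = article.get("description") or ""
    if len(desc) > max_desc_chars:
        desc = desc[:max_desc_chars].rstrip() + "…"
    esc = lambda s: html.escape(s or "", quote=True)
    return (
        '<div class="card">'
        f'<img src="{esc(article.get("mainImage"))}" alt="{esc(article.get("title"))}">'
        '<div class="card-body">'
        f'<h5 class="card-title">{esc(article.get("title"))}</h5>'
        f'<p class="card-text">{esc(desc)}</p>'
        f'<a href="{esc(article.get("url"))}" target="_blank" class="btn btn-primary">Read more...</a>'
        "</div></div>"
    )

print(render_card({"title": "KPIs & Apps", "description": "How to measure KPIs.",
                   "url": "https://example.com/post", "mainImage": "https://example.com/img.jpg"}))
```

The escaping matters because the extracted title and description come from an arbitrary third-party page and may contain markup.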

Let me know what you think of the tutorial in the comments below.


[Tutorial] Extract full news article content from any RSS feed using Extract API

Learn how to extract all fields from any RSS feed or from a given list of URLs. For this example, we will be using Medium’s RSS feed. The code will be in Python but can easily be adapted to other languages.

Let’s start by installing the packages. We will be using “feedparser” to parse Medium’s RSS feed.

pip install feedparser
pip install requests

Let’s begin by extracting the links from the RSS feed. For this example, we will be extracting articles from “Towards Data Science”, one of the leading blogs covering Data Science, Machine Learning & Artificial Intelligence.

import feedparser

NewsFeed = feedparser.parse("https://towardsdatascience.com/feed")
print("Total entries found in feed: " + str(len(NewsFeed.entries)) + "\n")

i = 0
for entry in NewsFeed.entries:
    print(str(i) + ": Got url: " + entry.link)
    i = i + 1

Now that we are able to extract the links, we want to extract the entire content, summary, metadata, and other details for each news article in the feed.

To extract these fields, we will be using Pipfeed’s extract API: https://promptapi.com/marketplace/description/pipfeed-api. You can get a free API key from Prompt API.

import requests

url = "https://api.promptapi.com/pipfeed"

payload = "https://towardsdatascience.com/topic-model-evaluation-3c43e2308526"
headers = {
  "apikey": "YOUR_API_KEY"
}

response = requests.request("POST", url, headers=headers, data=payload)

status_code = response.status_code
result = response.text
print(result)

The above code will extract the given URL and return all of the fields. Below is the response we get for this request. Do not forget to replace YOUR_API_KEY with your own key generated from Prompt API.
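To run the same extraction over every link collected from the feed, the request can be wrapped in a loop. Below is a minimal sketch; `extract_all`, `post_fn`, and the stub are hypothetical names introduced here (not part of the API), and the HTTP call is injectable so the loop can be illustrated without network access:

```python
import json

API_URL = "https://api.promptapi.com/pipfeed"
HEADERS = {"apikey": "YOUR_API_KEY"}  # replace with your own key from Prompt API

def extract_all(links, post_fn):
    """Call the extract endpoint for each link and return the decoded responses.

    post_fn(url, headers, data) must return the response body as a string;
    in production this could be a thin wrapper around requests.post, e.g.
    lambda u, h, d: requests.post(u, headers=h, data=d).text
    """
    articles = []
    for link in links:
        body = post_fn(API_URL, HEADERS, link)
        articles.append(json.loads(body))
    return articles

# Offline stub standing in for the real HTTP call:
stub = lambda url, headers, data: json.dumps({"url": data, "title": "stub title"})
results = extract_all(["https://example.com/a", "https://example.com/b"], stub)
print([r["url"] for r in results])
# → ['https://example.com/a', 'https://example.com/b']
```

On the free plan (100 calls per day), you may want to limit how many feed entries you extract per run.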

The “summary” and “predictedCategories” fields are generated using Pipfeed’s AI models. The rest of the fields are extracted from the article HTML itself.

{
  "publishedAt": "2020-11-09T05:15:23.001Z",
  "title": "Topic Model Evaluation",
  "authors": [
    "Giri Rabindranath"
  ],
  "description": "Evaluation is the key to understanding topic models - This article explains what topic model evaluation is, why it's important and how to do it",
  "language": "en",
  "url": "https://towardsdatascience.com/topic-model-evaluation-3c43e2308526",
  "mainImage": "https://miro.medium.com/max/1200/1*wvlqQPpOHFK7xQ1XOhe6xg.jpeg",
  "category": "machine-learning",
  "categories": null,
  "predictedCategories": [
    "machine-learning",
    "data-science",
    "programming"
  ],
  "tags": [],
  "keywords": [
    "coherence",
    "evaluation",
    "human",
    "model",
    "models",
    "topic",
    "topics",
    "way",
    "word",
    "words"
  ],
  "summary": "In this article, we\u2019ll look at topic model evaluation, what it is and how to do it.\nWhat is topic model evaluation?\nTopic model evaluation is the process of assessing how well a topic model does what it is designed for.\nThis is why topic model evaluation matters.\nHow to evaluate topic models \u2014 RecapThis article has hopefully made one thing clear \u2014 topic model evaluation isn\u2019t easy!",
  "images": [
    "https://miro.medium.com/fit/c/140/140/1*74Yrxu8s4sOtTECtixv9Fg.jpeg",
    "https://miro.medium.com/max/60/1*[email protected]?q=20",
    "https://miro.medium.com/fit/c/140/140/0*l_zfjU9IKMa47tfy",
    "https://miro.medium.com/fit/c/56/56/2*b2y5uCYazQ9FgiUQEUHT6Q.jpeg",
    "https://miro.medium.com/max/60/1*mpyrgqwMjfclV2oN1U2VIA.jpeg?q=20",
    "https://miro.medium.com/max/698/1*E4oPMmq5jTKuStZJuyDGpw.jpeg",
    "https://miro.medium.com/max/12032/1*wvlqQPpOHFK7xQ1XOhe6xg.jpeg",
    "https://miro.medium.com/max/60/1*_MXaw5BKgIsm8J3dOUNHMg.jpeg?q=20",
    "https://miro.medium.com/max/224/1*AGyTPCaRzVqL77kFwUwHKg.png",
    "https://miro.medium.com/max/270/1*W_RAPQ62h0em559zluJLdQ.png",
    "https://miro.medium.com/max/60/1*E4oPMmq5jTKuStZJuyDGpw.jpeg?q=20",
    "https://miro.medium.com/max/1200/1*wvlqQPpOHFK7xQ1XOhe6xg.jpeg",
    "https://miro.medium.com/max/60/0*aP8H1qpRN_OR1x5r?q=20",
    "https://miro.medium.com/max/60/0*NIpOoYo9iHt4lMbg?q=20",
    "https://miro.medium.com/max/60/0*l_zfjU9IKMa47tfy?q=20",
    "https://miro.medium.com/max/270/1*Crl55Tm6yDNMoucPo1tvDg.png",
    "https://miro.medium.com/max/784/1*_MXaw5BKgIsm8J3dOUNHMg.jpeg",
    "https://miro.medium.com/fit/c/140/140/1*FTG-junI6KJzojC_xRVNXg.png",
    "https://miro.medium.com/max/60/0*fG5RLd48iOZezB_y.jpeg?q=20",
    "https://miro.medium.com/fit/c/140/140/0*NIpOoYo9iHt4lMbg",
    "https://miro.medium.com/fit/c/140/140/1*[email protected]",
    "https://miro.medium.com/max/60/1*wvlqQPpOHFK7xQ1XOhe6xg.jpeg?q=20",
    "https://miro.medium.com/fit/c/140/140/1*mpyrgqwMjfclV2oN1U2VIA.jpeg",
    "https://miro.medium.com/fit/c/140/140/0*fG5RLd48iOZezB_y.jpeg",
    "https://miro.medium.com/fit/c/140/140/0*aP8H1qpRN_OR1x5r",
    "https://miro.medium.com/max/60/1*74Yrxu8s4sOtTECtixv9Fg.jpeg?q=20",
    "https://miro.medium.com/max/60/1*FTG-junI6KJzojC_xRVNXg.png?q=20"
  ],
  "blogName": null,
  "blogLogoUrl": null,
"html": "<div class=\"page\" id=\"readability-page-1\"><section><div><div><h2 id=\"ef6b\">DATA SCIENCE EXPLAINED</h2><h2 id=\"e375\">Here\u2019s what you need to know about evaluating topic models</h2><div><div><div><div><a rel=\"noopener\" href=\"https://medium.com/@g_rabi?source=post_page-----3c43e2308526--------------------------------\"><div><p><img height=\"28\" width=\"28\" src=\"https://miro.medium.com/fit/c/56/56/2*b2y5uCYazQ9FgiUQEUHT6Q.jpeg\" alt=\"Giri Rabindranath\"></p></div></a></div></div></div></div></div></div><div><p id=\"8bff\"><em>Topic models are widely used for analyzing unstructured text data, but they provide no guidance on the quality of topics produced. Evaluation is the key to understanding topic models. In this article, we\u2019ll look at what topic model evaluation is, why it\u2019s important and how to do it.</em></p></div></section><section><div><div><h2 id=\"324c\">Contents</h2><ul><li id=\"dd12\"><a rel=\"noopener\" href=\"#f0ce\"><em>What is topic model evaluation</em></a>?</li><li id=\"ceba\"><a rel=\"noopener\" href=\"#d1ae\"><em>How to evaluate topic models</em></a></li><li id=\"ea5d\"><a rel=\"noopener\" href=\"#2932\"><em>Evaluating topic models \u2014 Human judgment</em></a></li><li id=\"6275\"><a rel=\"noopener\" href=\"#9b50\"><em>Evaluating topic models \u2014 Quantitative metrics</em></a></li><li id=\"ea38\"><a rel=\"noopener\" href=\"#19ff\"><em>Calculating coherence using Gensim in Python</em></a></li><li id=\"95a3\"><a rel=\"noopener\" href=\"#1756\"><em>Limitations of coherence</em></a></li><li id=\"251a\"><a rel=\"noopener\" href=\"#63c4\"><em>How to evaluate topic models \u2014 Recap</em></a></li><li id=\"e448\"><a rel=\"noopener\" href=\"#31aa\"><em>Conclusion</em></a></li></ul><p id=\"6f84\">Topic modeling is a branch of <a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/natural-language-processing-explained-simply/\">natural language processing</a> that\u2019s used for exploring text data. 
It works by identifying key themes \u2014 or topics \u2014 based on the words or phrases in the data that have a similar meaning. Its versatility and ease-of-use have led to a variety of applications.</p><p id=\"3772\">Be<span id=\"rmm\">i</span>ng a form of unsupervised learning, topic modeling is useful when annotated or labeled data isn\u2019t available. This is helpful, as the majority of emerging text data isn\u2019t labeled, and labeling is time-consuming and expensive to do.</p><p id=\"030c\">For an easy-to-follow, intuitive explanation of topic modeling and its applications, see <a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/topic-modeling-intuitive/\">this article</a>.</p><p id=\"fb4a\">One of the shortcomings of topic modeling is that there\u2019s no guidance about the quality of topics produced. If you want to learn about how meaningful the topics are, you\u2019ll need to evaluate the topic model.</p><p id=\"b937\">In this article, we\u2019ll look at topic model evaluation, what it is and how to do it. It\u2019s an important part of the topic modeling process that sometimes gets overlooked. For a topic model to be truly useful, some sort of evaluation is needed to understand how relevant the topics are for the purpose of the model.</p><p id=\"b85d\">Topic model evaluation is the process of assessing how well a topic model does what it is designed for.</p><p id=\"44ee\">When you run a topic model, you usually do it with a specific purpose in mind. It may be for document classification, to explore a set of unstructured texts, or some other analysis. As with any model, if you wish to know how effective it is at doing what it\u2019s designed for, you\u2019ll need to evaluate it. This is why topic model evaluation matters.</p><p id=\"e9c9\">Evaluating a topic model can help you decide if the model has captured the internal structure of a corpus (a collection of text documents). 
This can be particularly useful in tasks like e-discovery, where the effectiveness of a topic model can have implications for legal proceedings or other important matters.</p><p id=\"a51a\">More generally, topic model evaluation can help you answer questions like:</p><ul><li id=\"b7ef\">Are the identified topics understandable?</li><li id=\"1d2d\">Are the topics coherent?</li><li id=\"325e\">Does the topic model serve the purpose it is being used for?</li></ul><p id=\"da03\">Without some form of evaluation, you won\u2019t know how well your topic model is performing or if it\u2019s being used properly.</p><p id=\"c559\">Evaluating a topic model isn\u2019t always easy, however.</p><p id=\"3adc\">If a topic model is used for a measurable task, such as classification, then its effectiveness is relatively straightforward to calculate (eg. measure the proportion of successful classifications). But if the model is used for a more qualitative task, such as exploring the semantic themes in an unstructured corpus, then evaluation is more difficult.</p><p id=\"ff58\">In this article, we\u2019ll focus on evaluating topic models that do not have clearly measurable outcomes. These include topic models used for document exploration, content recommendation and e-discovery, amongst other use cases.</p><p id=\"dc38\">Evaluating these types of topic models seeks to understand how easy it is for humans to interpret the topics produced by the model. Put another way, topic model evaluation is about the \u2018human interpretability\u2019 or \u2018semantic interpretability\u2019 of topics.</p><p id=\"48f0\">There are a number of ways to evaluate topic models. These include:</p><p id=\"fb66\"><em>Human judgment</em></p><ul><li id=\"422a\">Observation-based, eg. observing the top \u2019N\u2019 words in a topic</li><li id=\"cee6\">Interpretation-based, eg. 
\u2018word intrusion\u2019 and \u2018topic intrusion\u2019 to identify the words or topics that \u201cdon\u2019t belong\u201d in a topic or document</li></ul><p id=\"ce7c\"><em>Quantitative metrics</em> \u2014 Perplexity (held out likelihood) and coherence calculations</p><p id=\"8c62\"><em>Mixed approaches</em> \u2014 Combinations of judgment-based and quantitative approaches</p><p id=\"6bd7\">Let\u2019s look at a few of these more closely.</p><h2 id=\"f1c4\">Observation-based approaches</h2><p id=\"4ea9\">The easiest way to evaluate a topic is to look at the most probable words in the topic. This can be done in a tabular form, for instance by listing the top 10 words in each topic, or in other formats.</p><p id=\"939a\">One visually appealing way to observe the probable words in a topic is through Word Clouds.</p><p id=\"9585\">To illustrate, the following example is a Word Cloud based on topics modeled from the minutes of US Federal Open Market Committee (FOMC) meetings. The FOMC is an important part of the US financial system and meets 8 times per year. The following Word Cloud is based on a topic that emerged from an analysis of topic trends in FOMC meetings over 2007 to 2020.</p><figure><a href=\"https://highdemandskills.com/topic-trends-fomc/\"><div><div><p><img data-old-src=\"https://miro.medium.com/max/60/1*E4oPMmq5jTKuStZJuyDGpw.jpeg?q=20\" sizes=\"349px\" srcset=\"https://miro.medium.com/max/552/1*E4oPMmq5jTKuStZJuyDGpw.jpeg 276w, https://miro.medium.com/max/698/1*E4oPMmq5jTKuStZJuyDGpw.jpeg 349w\" height=\"181\" width=\"349\" src=\"https://miro.medium.com/max/698/1*E4oPMmq5jTKuStZJuyDGpw.jpeg\" alt=\"Image for post\"></p></div></div></a><figcaption>Word Cloud of \u201cinflation\u201d topic. Image by Author.</figcaption></figure><p id=\"b025\">Topic modeling doesn\u2019t provide guidance on the meaning of any topic, so labeling a topic requires human interpretation. 
In this case, based on the most probable words displayed in the Word Cloud, the topic appears to be about \u201cinflation\u201d.</p><p id=\"ead0\">You can see more Word Clouds from the FOMC topic modeling example <a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/topic-trends-fomc/#h4-interpret-topics\">here</a>.</p><p id=\"d586\">Beyond observing the most probable words in a topic, a more comprehensive observation-based approach called \u2018Termite\u2019 has been <a rel=\"noopener nofollow\" href=\"http://vis.stanford.edu/files/2012-Termite-AVI.pdf\">developed by Stanford University researchers</a>.</p><p id=\"a78d\">Termite is described as \u201c<em>a visualization of the term-topic distributions produced by topic models\u201d </em>[1]. In this description, \u2018term\u2019 refers to a \u2018word\u2019, so \u2018term-topic distributions\u2019 are \u2018word-topic distributions\u2019.</p><p id=\"a348\">Termite produces meaningful visualizations by introducing two calculations:</p><ol><li id=\"ddbe\">A \u2018saliency\u2019 measure, which identifies words that are more relevant for the topics in which they appear (beyond mere frequencies of their counts)</li><li id=\"246c\">A \u2018seriation\u2019 method, for sorting words into more coherent groupings based on the degree of semantic similarity between them</li></ol><p id=\"d552\">Termite produces graphs which summarize words and topics based on saliency and seriation. This helps to identify more interpretable topics and leads to better topic model evaluation.</p><p id=\"5bce\">You can see example Termite visualizations <a rel=\"noopener nofollow\" href=\"http://vis.stanford.edu/topic-diagnostics/\">here</a>.</p><h2 id=\"1281\">Interpretation-based approaches</h2><p id=\"ccc6\">Interpretation-based approaches take more effort than observation-based approaches but produce better results. 
These approaches are considered a \u2018gold standard\u2019 for evaluating topic models since they use human judgment to maximum effect.</p><p id=\"eff2\">A good illustration of these is described in a <a rel=\"noopener nofollow\" href=\"http://users.umiacs.umd.edu/~jbg/docs/nips2009-rtl.pdf\">research paper</a> by Jonathan Chang and others (2009) [2] which developed \u2018word intrusion\u2019 and \u2018topic intrusion\u2019 to help evaluate semantic coherence.</p><p id=\"e268\"><strong>Word intrusion</strong></p><p id=\"cd41\">In word intrusion, subjects are presented with groups of 6 words, 5 of which belong to a given topic and one which does not \u2014 the \u2018intruder\u2019 word. Subjects are asked to identify the intruder word.</p><p id=\"3489\">To understand how this works, consider the group of words:</p><p id=\"7c02\">[ <em>dog, cat, horse, apple, pig, cow </em>]</p><p id=\"e26b\">Can you spot the intruder?</p><p id=\"0e95\">Most subjects pick \u2018apple\u2019 because it looks different to the others (all of which are animals, suggesting an animal-related topic for the others).</p><p id=\"294d\">Now, consider:</p><p id=\"3370\">[ <em>car, teacher, platypus, agile, blue, Zaire </em>]</p><p id=\"2ebb\">Which is the intruder in this group of words?</p><p id=\"fed9\">It\u2019s much harder to identify, so most subjects choose the intruder at random. This implies poor topic coherence.</p><p id=\"3b59\"><strong>Topic intrusion</strong></p><p id=\"1ee8\">Similar to word intrusion, in topic intrusion subjects are asked to identify the \u2018intruder\u2019 topic from groups of topics that make up documents.</p><p id=\"2b99\">In this task, subjects are shown a title and a snippet from a document along with 4 topics. 
Three of the topics have a high probability of belonging to the document while the remaining topic has a low probability \u2014 the \u2018intruder\u2019 topic.</p><p id=\"41ed\">As for word intrusion, the intruder topic is sometimes easy to identify and at other times not. The success with which subjects can correctly choose the intruder helps to determine the level of coherence.</p><p id=\"7489\">While evaluation methods based on human judgment can produce good results, they are costly and time-consuming to do.</p><p id=\"e193\">Moreover, human judgment isn\u2019t clearly defined and humans don\u2019t always agree on what makes a good topic. In contrast, the appeal of quantitative metrics is the ability to standardize, automate and scale the evaluation of topic models.</p><h2 id=\"2047\">Held out likelihood or perplexity</h2><p id=\"415d\">A traditional metric for evaluating topic models is the \u2018held out likelihood\u2019, also referred to as \u2018perplexity\u2019.</p><p id=\"4c08\">This is calculated by splitting a dataset into two parts \u2014 a training set and a test set. The idea is to train a topic model using the training set and then test the model on a test set which contains previously unseen documents (ie. held out documents). Likelihood is usually calculated as a logarithm, so this metric is sometimes referred to as the \u2018held out log-likelihood\u2019.</p><p id=\"c176\">The perplexity metric is a predictive one. It assesses a topic model\u2019s ability to predict a test set after having been trained on a training set. In practice, around 80% of a corpus may be set aside as a training set with the remaining 20% being a test set.</p><p id=\"4ffe\">Although the perplexity metric is a natural choice for topic models from a technical standpoint, it does not provide good results for human interpretation. 
This was demonstrated by research, again by Jonathan Chang and others (2009), which found that perplexity did not do a good job of conveying whether topics are coherent or not.</p><p id=\"bf74\">When comparing perplexity against human judgment approaches like word intrusion and topic intrusion, the research showed a negative correlation. This means that as the perplexity score improves (ie. the held out log-likelihood is higher), the human interpretability of topics and topic mixes get worse (rather than better). The perplexity metric therefore appears to be misleading when it comes to the human understanding of topics and topic mixes.</p><p id=\"ad6d\">Are there better quantitative metrics than perplexity for evaluating topic models?</p><h2 id=\"6e3f\">Coherence</h2><p id=\"a14e\">One of the shortcomings of perplexity is that it does not capture context, ie. perplexity does not capture the relationship between words in a topic or topics in a document. The idea of semantic context is important for human understanding.</p><p id=\"3175\">To overcome this, approaches have been developed that attempt to capture context between words in a topic. They use measures such as the conditional likelihood (rather than the log-likelihood) of the co-occurrence of words in a topic. These approaches are collectively referred to as \u2018coherence\u2019.</p><p id=\"6ad7\">There\u2019s been a lot of research on coherence over recent years and as a result there are a variety of methods available. A useful way to deal with this is to set up a framework that allows you to choose the methods that you prefer.</p><p id=\"ebf8\">Such a framework has been proposed by researchers at <a rel=\"noopener nofollow\" href=\"http://aksw.org/About.html\">AKSW</a>. 
Using this <a rel=\"noopener nofollow\" href=\"http://svn.aksw.org/papers/2015/WSDM_Topic_Evaluation/public.pdf\">framework</a>, which we\u2019ll call the \u201ccoherence pipeline\u201d, you can calculate coherence in a way that works best for your circumstances (eg. based on availability of a corpus, speed of computation etc).</p><p id=\"06ae\">The coherence pipeline offers a versatile way to calculate coherence. It is also what Gensim, a popular package for topic modeling in Python, uses for implementing coherence (more on this later).</p><p id=\"94ae\">The coherence pipeline is made up of four stages:</p><ol><li id=\"acc4\">Segmentation</li><li id=\"1668\">Probability estimation</li><li id=\"75ce\">Confirmation</li><li id=\"0982\">Aggregation</li></ol><p id=\"4e6c\">These four stages form the basis of coherence calculations and work as follows:</p><p id=\"91d7\"><strong>Segmentation</strong> sets up the word groupings that are used for pair-wise comparisons.</p><p id=\"b612\">Let\u2019s say that we wish to calculate the coherence of a set of topics. Coherence calculations start by choosing words within each topic (usually the most frequently occurring words) and comparing them with each other, one pair at a time. Segmentation is the process of choosing how words are grouped together for these pair-wise comparisons.</p><p id=\"7439\">Word groupings can be made up of single words or larger groupings. For single words, each word in a topic is compared with each other word in the topic. For 2-word or 3-word groupings, each 2-word group is compared with each other 2-word group, or each 3-word group is compared with each other 3-word group, and so on.</p><p id=\"0de4\">Comparisons can also be made between groupings of different size, for instance single words can be compared with 2-word or 3-word groups.</p><p id=\"4a41\"><strong>Probability </strong>estimation refers to the type of probability measure that underpins the calculation of coherence. 
To illustrate, consider the two widely used coherence approaches of <em>UCI</em> and <em>UMass</em>:</p><ul><li id=\"6032\">UCI is based on point-wise mutual information (PMI) calculations. This is given by: <code><strong>PMI</strong>(wi,wj) = log[(<strong>P</strong>(wi,wj) + e) / <strong>P</strong>(wi).<strong>P</strong>(wj)]</code>, for words <code>wi</code> and <code>wj</code> and some small number <code>e</code>, and where <code><strong>P</strong>(wi)</code> is the probability of word <code>i</code> occurring in a topic and <code><strong>P</strong>(wi,wj)</code> is the probability of both words <code>i</code> and <code>j</code> appearing in a topic. Here, the probabilities are based on word co-occurrence counts.</li><li id=\"0083\">UMass caters for the order in which words appear and is based on the calculation of: <code>log[(<strong>P</strong>(wi,wj) + e) / <strong>P</strong>(wj)]</code>, with <code>wi</code>, <code>wj</code>, <code><strong>P</strong>(wi)</code> and <code><strong>P</strong>(wi,wj)</code> as for UCI. Here, the probabilities are conditional, since <code><strong>P</strong>(wi|wj) = [(<strong>P</strong>(wi,wj) / <strong>P</strong>(wj)]</code>, which we know from <a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/bayes-theorem/\">Bayes\u2019 theorem</a>. So, this approach measures how much a common word appearing within a topic is a good predictor for a less common word in the topic.</li></ul><p id=\"f8a3\"><strong>Confirmation</strong> measures how strongly each word grouping in a topic relates to other word groupings (ie. how similar they are). There are direct and indirect ways of doing this, depending on the frequency and distribution of words in a topic.</p><p id=\"1c55\"><strong>Aggregation</strong> is the final step of the coherence pipeline. It\u2019s a summary calculation of the confirmation measures of all the word groupings, resulting in a single coherence score. 
This is usually done by averaging the confirmation measures using the mean or median. Other calculations may also be used, such as the harmonic mean, quadratic mean, minimum or maximum.</p><p id=\"93e5\">Coherence is a popular way to quantitatively evaluate topic models and has good coding implementations in languages such as Python (eg. Gensim).</p><p id=\"ddfb\">To see how coherence works in practice, let\u2019s look at an example.</p><p id=\"8445\">Gensim is a widely used package for topic modeling in Python. It uses <a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/topic-modeling-intuitive/\">Latent Dirichlet Allocation</a> (LDA) for topic modeling and includes functionality for calculating the coherence of topic models.</p><p id=\"d3e9\">As mentioned, Gensim calculates coherence using the coherence pipeline, offering a range of options for users.</p><p id=\"1da5\">The following example uses Gensim to model topics for US company earnings calls. These are quarterly conference calls in which company management discusses financial performance and other updates with analysts, investors and the media. They are an important fixture in the US financial calendar.</p><p id=\"ad5b\">The following code calculates coherence for the trained topic model in the example:</p><figure><div></div><figcaption><a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/topic-modeling-lda/\">Calculating the coherence score using Gensim</a></figcaption></figure><p id=\"c9d2\">The coherence method that was chosen in this example is \u201cc_v\u201d. This is one of several choices offered by Gensim. 
Other choices include UCI (\u201cc_uci\u201d) and UMass (\u201cu_mass\u201d).</p><p id=\"594d\">For more information about the Gensim package and the various choices that go with it, please refer to the <a rel=\"noopener nofollow\" href=\"https://radimrehurek.com/gensim/models/coherencemodel.html\">Gensim documentation</a>.</p><p id=\"3292\">Gensim can also be used to explore the effect of varying LDA parameters on a topic model\u2019s coherence score. This helps to select the best choice of parameters for the model. The following code shows how to calculate coherence for varying values of the alpha parameter in the LDA model:</p><figure><div></div><figcaption><a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/topic-modeling-lda/\">Investigating coherence by varying the alpha parameter</a></figcaption></figure><p id=\"fe44\">The above code also produces a chart of the model\u2019s coherence score for different values of the alpha parameter:</p><figure><a href=\"https://highdemandskills.com/topic-modeling-lda/\"><div><div><p><img data-old-src=\"https://miro.medium.com/max/60/1*_MXaw5BKgIsm8J3dOUNHMg.jpeg?q=20\" sizes=\"392px\" srcset=\"https://miro.medium.com/max/552/1*_MXaw5BKgIsm8J3dOUNHMg.jpeg 276w, https://miro.medium.com/max/784/1*_MXaw5BKgIsm8J3dOUNHMg.jpeg 392w\" height=\"262\" width=\"392\" src=\"https://miro.medium.com/max/784/1*_MXaw5BKgIsm8J3dOUNHMg.jpeg\" alt=\"Image for post\"></p></div></div></a><figcaption>Topic model coherence for different values of the alpha parameter. Image by Author.</figcaption></figure><p id=\"ae19\">This helps in choosing the best value of alpha based on coherence scores.</p><p id=\"e6dc\">In practice, you would also want to check the effect of varying other model parameters on the coherence score. 
You can see how this was done in the US company earning call example <a rel=\"noopener nofollow\" href=\"https://highdemandskills.com/topic-modeling-lda/#h3-3\">here</a>.</p><p id=\"ecfa\">The overall choice of parameters would depend on balancing the varying effects on coherence, and also on judgment about the nature of the topics and the purpose of the model.</p><p id=\"098d\">Despite its usefulness, coherence has some important limitations.</p><p id=\"5a89\">According to <a rel=\"noopener nofollow\" href=\"https://www.linkedin.com/in/mattilyra/?originalSubdomain=de\">Matti Lyra</a>, a leading data scientist and researcher, the key limitations are:</p><ul><li id=\"83d2\"><strong>Variability</strong> \u2014 The aggregation step of the coherence pipeline is typically calculated over a large number of word-group pairs. While this produces a metric (eg. mean of the coherence scores), there\u2019s no way of estimating the variability of the metric. This means that there\u2019s no way of knowing the degree of confidence in the metric. Hence, although we can calculate aggregate coherence scores for a topic model, we don\u2019t really know how well that score reflects the actual coherence of the model (relative to statistical noise).</li><li id=\"72c8\"><strong>Comparability</strong> \u2014 The coherence pipeline allows the user to select different methods for each part of the pipeline. This, combined with the unknown variability of coherence scores, makes it difficult to meaningfully compare different coherence scores, or coherence scores between different models.</li><li id=\"4722\"><strong>Reference corpus</strong> \u2014 The choice of reference corpus is important. 
In cases where the probability estimates are based on the reference corpus, then a smaller or domain-specific corpus can produce misleading results when applied to set of documents that are quite different to the reference corpus.</li><li id=\"5a31\"><strong>\u201cJunk\u201d topics</strong> \u2014 Topic modeling provides no guarantees about the topics that are identified (hence the need for evaluation) and sometimes produces meaningless, or \u201cjunk\u201d, topics. These can distort the results of coherence calculations. The difficulty lies in identifying these junk topics for removal \u2014 it usually requires human inspection to do so. But involving humans in the process defeats the very purpose of using coherence, ie. to automate and scale topic model evaluation.</li></ul><p id=\"5cab\">With these limitations in mind, what\u2019s the best approach for evaluating topic models?</p><p id=\"1a08\">This article has hopefully made one thing clear \u2014 topic model evaluation isn\u2019t easy!</p><p id=\"6705\">Unfortunately, there\u2019s no straight forward or reliable way to evaluate topic models to a high standard of human interpretability. Also, the very idea of human interpretability differs between people, domains and use cases.</p><p id=\"ae25\">Nevertheless, the most reliable way to evaluate topic models is by using human judgment. But this takes time and is expensive.</p><p id=\"e576\">In terms of quantitative approaches, coherence is a versatile and scalable way to evaluate topic models, notwithstanding its limitations.</p><p id=\"659b\">In practice, you\u2019ll need to decide how to evaluate a topic model on a case-by-case basis, including which methods and process to use. 
A degree of domain knowledge and a clear understanding of the purpose of the model will help.</p><p id=\"7e42\">The thing to remember is that some sort of evaluation can be important in helping you assess the merits of your topic model and how to apply it.</p><p id=\"d9cf\">Topic model evaluation is an important part of the topic modeling process. This is because topic modeling offers no guidance on the quality of topics produced. Evaluation helps you assess how relevant the produced topics are, and how effective the topic model is.</p><p id=\"1dcb\">Evaluating topic models is unfortunately difficult to do. There are various approaches available, but the best results come from human interpretation. This is a time-consuming and costly exercise.</p><p id=\"a922\">Quantitative evaluation methods offer the benefits of automation and scaling. Coherence is the most popular of these and is easy to implement in widely used coding languages, such as with Gensim in Python.</p><p id=\"c34b\">In practice, the best approach for evaluating topic models will depend on the circumstances. Domain knowledge, an understanding of the model\u2019s purpose, and judgment will help in deciding the best evaluation approach.</p><p id=\"db90\">Topic modeling is an area of ongoing research \u2014 newer, better ways of evaluating topic models are likely to emerge.</p><p id=\"cd6c\">In the meantime, topic modeling continues to be a versatile and effective way to analyze and make sense of unstructured text data. And with the continued use of topic models, evaluation will remain an important part of the process.</p><p id=\"6e32\">[1] J. Chuang, C. D. Manning and J. Heer, <a rel=\"noopener nofollow\" href=\"http://vis.stanford.edu/files/2012-Termite-AVI.pdf\">Termite: Visualization Techniques for Assessing Textual Topic Models</a> (2012), Stanford University Computer Science Department</p><p id=\"7dfa\">[2] J. 
Chang et al, <a rel=\"noopener nofollow\" href=\"http://users.umiacs.umd.edu/~jbg/docs/nips2009-rtl.pdf\">Reading Tea Leaves: How Humans Interpret Topic Models</a> (2009), Neural Information Processing Systems</p></div></div></section></div>",
"text": "DATA SCIENCE EXPLAINED Here\u2019s what you need to know about evaluating topic models Topic models are widely used for analyzing unstructured text data, but they provide no guidance on the quality of topics produced. Evaluation is the key to understanding topic models. In this article, we\u2019ll look at what topic model evaluation is, why it\u2019s important and how to do it. Contents What is topic model evaluation? How to evaluate topic models Evaluating topic models \u2014 Human judgment Evaluating topic models \u2014 Quantitative metrics Calculating coherence using Gensim in Python Limitations of coherence How to evaluate topic models \u2014 Recap Conclusion Topic modeling is a branch of natural language processing that\u2019s used for exploring text data. It works by identifying key themes \u2014 or topics \u2014 based on the words or phrases in the data that have a similar meaning. Its versatility and ease-of-use have led to a variety of applications. Being a form of unsupervised learning, topic modeling is useful when annotated or labeled data isn\u2019t available. This is helpful, as the majority of emerging text data isn\u2019t labeled, and labeling is time-consuming and expensive to do. For an easy-to-follow, intuitive explanation of topic modeling and its applications, see this article. One of the shortcomings of topic modeling is that there\u2019s no guidance about the quality of topics produced. If you want to learn about how meaningful the topics are, you\u2019ll need to evaluate the topic model. In this article, we\u2019ll look at topic model evaluation, what it is and how to do it. It\u2019s an important part of the topic modeling process that sometimes gets overlooked. For a topic model to be truly useful, some sort of evaluation is needed to understand how relevant the topics are for the purpose of the model. Topic model evaluation is the process of assessing how well a topic model does what it is designed for. 
When you run a topic model, you usually do it with a specific purpose in mind. It may be for document classification, to explore a set of unstructured texts, or some other analysis. As with any model, if you wish to know how effective it is at doing what it\u2019s designed for, you\u2019ll need to evaluate it. This is why topic model evaluation matters. Evaluating a topic model can help you decide if the model has captured the internal structure of a corpus (a collection of text documents). This can be particularly useful in tasks like e-discovery, where the effectiveness of a topic model can have implications for legal proceedings or other important matters. More generally, topic model evaluation can help you answer questions like: Are the identified topics understandable? Are the topics coherent? Does the topic model serve the purpose it is being used for? Without some form of evaluation, you won\u2019t know how well your topic model is performing or if it\u2019s being used properly. Evaluating a topic model isn\u2019t always easy, however. If a topic model is used for a measurable task, such as classification, then its effectiveness is relatively straightforward to calculate (eg. measure the proportion of successful classifications). But if the model is used for a more qualitative task, such as exploring the semantic themes in an unstructured corpus, then evaluation is more difficult. In this article, we\u2019ll focus on evaluating topic models that do not have clearly measurable outcomes. These include topic models used for document exploration, content recommendation and e-discovery, amongst other use cases. Evaluating these types of topic models seeks to understand how easy it is for humans to interpret the topics produced by the model. Put another way, topic model evaluation is about the \u2018human interpretability\u2019 or \u2018semantic interpretability\u2019 of topics. There are a number of ways to evaluate topic models. 
These include: Human judgment Observation-based, eg. observing the top \u2019N\u2019 words in a topic Interpretation-based, eg. \u2018word intrusion\u2019 and \u2018topic intrusion\u2019 to identify the words or topics that \u201cdon\u2019t belong\u201d in a topic or document Quantitative metrics \u2014 Perplexity (held out likelihood) and coherence calculations Mixed approaches \u2014 Combinations of judgment-based and quantitative approaches Let\u2019s look at a few of these more closely. Observation-based approaches The easiest way to evaluate a topic is to look at the most probable words in the topic. This can be done in a tabular form, for instance by listing the top 10 words in each topic, or in other formats. One visually appealing way to observe the probable words in a topic is through Word Clouds. To illustrate, the following example is a Word Cloud based on topics modeled from the minutes of US Federal Open Market Committee (FOMC) meetings. The FOMC is an important part of the US financial system and meets 8 times per year. The following Word Cloud is based on a topic that emerged from an analysis of topic trends in FOMC meetings over 2007 to 2020. Word Cloud of \u201cinflation\u201d topic. Image by Author. Topic modeling doesn\u2019t provide guidance on the meaning of any topic, so labeling a topic requires human interpretation. In this case, based on the most probable words displayed in the Word Cloud, the topic appears to be about \u201cinflation\u201d. You can see more Word Clouds from the FOMC topic modeling example here. Beyond observing the most probable words in a topic, a more comprehensive observation-based approach called \u2018Termite\u2019 has been developed by Stanford University researchers. Termite is described as \u201ca visualization of the term-topic distributions produced by topic models\u201d [1]. 
In this description, \u2018term\u2019 refers to a \u2018word\u2019, so \u2018term-topic distributions\u2019 are \u2018word-topic distributions\u2019. Termite produces meaningful visualizations by introducing two calculations: A \u2018saliency\u2019 measure, which identifies words that are more relevant for the topics in which they appear (beyond mere frequencies of their counts) A \u2018seriation\u2019 method, for sorting words into more coherent groupings based on the degree of semantic similarity between them Termite produces graphs which summarize words and topics based on saliency and seriation. This helps to identify more interpretable topics and leads to better topic model evaluation. You can see example Termite visualizations here. Interpretation-based approaches Interpretation-based approaches take more effort than observation-based approaches but produce better results. These approaches are considered a \u2018gold standard\u2019 for evaluating topic models since they use human judgment to maximum effect. A good illustration of these is described in a research paper by Jonathan Chang and others (2009) [2] which developed \u2018word intrusion\u2019 and \u2018topic intrusion\u2019 to help evaluate semantic coherence. Word intrusion In word intrusion, subjects are presented with groups of 6 words, 5 of which belong to a given topic and one which does not \u2014 the \u2018intruder\u2019 word. Subjects are asked to identify the intruder word. To understand how this works, consider the group of words: [ dog, cat, horse, apple, pig, cow ] Can you spot the intruder? Most subjects pick \u2018apple\u2019 because it looks different to the others (all of which are animals, suggesting an animal-related topic for the others). Now, consider: [ car, teacher, platypus, agile, blue, Zaire ] Which is the intruder in this group of words? It\u2019s much harder to identify, so most subjects choose the intruder at random. This implies poor topic coherence. 
Topic intrusion Similar to word intrusion, in topic intrusion subjects are asked to identify the \u2018intruder\u2019 topic from groups of topics that make up documents. In this task, subjects are shown a title and a snippet from a document along with 4 topics. Three of the topics have a high probability of belonging to the document while the remaining topic has a low probability \u2014 the \u2018intruder\u2019 topic. As for word intrusion, the intruder topic is sometimes easy to identify and at other times not. The success with which subjects can correctly choose the intruder helps to determine the level of coherence. While evaluation methods based on human judgment can produce good results, they are costly and time-consuming to do. Moreover, human judgment isn\u2019t clearly defined and humans don\u2019t always agree on what makes a good topic. In contrast, the appeal of quantitative metrics is the ability to standardize, automate and scale the evaluation of topic models. Held out likelihood or perplexity A traditional metric for evaluating topic models is the \u2018held out likelihood\u2019, also referred to as \u2018perplexity\u2019. This is calculated by splitting a dataset into two parts \u2014 a training set and a test set. The idea is to train a topic model using the training set and then test the model on a test set which contains previously unseen documents (ie. held out documents). Likelihood is usually calculated as a logarithm, so this metric is sometimes referred to as the \u2018held out log-likelihood\u2019. The perplexity metric is a predictive one. It assesses a topic model\u2019s ability to predict a test set after having been trained on a training set. In practice, around 80% of a corpus may be set aside as a training set with the remaining 20% being a test set. Although the perplexity metric is a natural choice for topic models from a technical standpoint, it does not provide good results for human interpretation. 
This was demonstrated by research, again by Jonathan Chang and others (2009), which found that perplexity did not do a good job of conveying whether topics are coherent or not. When comparing perplexity against human judgment approaches like word intrusion and topic intrusion, the research showed a negative correlation. This means that as the perplexity score improves (ie. the held out log-likelihood is higher), the human interpretability of topics and topic mixes get worse (rather than better). The perplexity metric therefore appears to be misleading when it comes to the human understanding of topics and topic mixes. Are there better quantitative metrics than perplexity for evaluating topic models? Coherence One of the shortcomings of perplexity is that it does not capture context, ie. perplexity does not capture the relationship between words in a topic or topics in a document. The idea of semantic context is important for human understanding. To overcome this, approaches have been developed that attempt to capture context between words in a topic. They use measures such as the conditional likelihood (rather than the log-likelihood) of the co-occurrence of words in a topic. These approaches are collectively referred to as \u2018coherence\u2019. There\u2019s been a lot of research on coherence over recent years and as a result there are a variety of methods available. A useful way to deal with this is to set up a framework that allows you to choose the methods that you prefer. Such a framework has been proposed by researchers at AKSW. Using this framework, which we\u2019ll call the \u201ccoherence pipeline\u201d, you can calculate coherence in a way that works best for your circumstances (eg. based on availability of a corpus, speed of computation etc). The coherence pipeline offers a versatile way to calculate coherence. It is also what Gensim, a popular package for topic modeling in Python, uses for implementing coherence (more on this later). 
The coherence pipeline is made up of four stages: Segmentation Probability estimation Confirmation Aggregation These four stages form the basis of coherence calculations and work as follows: Segmentation sets up the word groupings that are used for pair-wise comparisons. Let\u2019s say that we wish to calculate the coherence of a set of topics. Coherence calculations start by choosing words within each topic (usually the most frequently occurring words) and comparing them with each other, one pair at a time. Segmentation is the process of choosing how words are grouped together for these pair-wise comparisons. Word groupings can be made up of single words or larger groupings. For single words, each word in a topic is compared with each other word in the topic. For 2-word or 3-word groupings, each 2-word group is compared with each other 2-word group, or each 3-word group is compared with each other 3-word group, and so on. Comparisons can also be made between groupings of different size, for instance single words can be compared with 2-word or 3-word groups. Probability estimation refers to the type of probability measure that underpins the calculation of coherence. To illustrate, consider the two widely used coherence approaches of UCI and UMass: UCI is based on point-wise mutual information (PMI) calculations. This is given by: PMI(wi,wj) = log[(P(wi,wj) + e) / P(wi).P(wj)], for words wi and wj and some small number e, and where P(wi) is the probability of word i occurring in a topic and P(wi,wj) is the probability of both words i and j appearing in a topic. Here, the probabilities are based on word co-occurrence counts. UMass caters for the order in which words appear and is based on the calculation of: log[(P(wi,wj) + e) / P(wj)], with wi, wj, P(wi) and P(wi,wj) as for UCI. Here, the probabilities are conditional, since P(wi|wj) = [(P(wi,wj) / P(wj)], which we know from Bayes\u2019 theorem. 
So, this approach measures how much a common word appearing within a topic is a good predictor for a less common word in the topic. Confirmation measures how strongly each word grouping in a topic relates to other word groupings (ie. how similar they are). There are direct and indirect ways of doing this, depending on the frequency and distribution of words in a topic. Aggregation is the final step of the coherence pipeline. It\u2019s a summary calculation of the confirmation measures of all the word groupings, resulting in a single coherence score. This is usually done by averaging the confirmation measures using the mean or median. Other calculations may also be used, such as the harmonic mean, quadratic mean, minimum or maximum. Coherence is a popular way to quantitatively evaluate topic models and has good coding implementations in languages such as Python (eg. Gensim). To see how coherence works in practice, let\u2019s look at an example. Gensim is a widely used package for topic modeling in Python. It uses Latent Dirichlet Allocation (LDA) for topic modeling and includes functionality for calculating the coherence of topic models. As mentioned, Gensim calculates coherence using the coherence pipeline, offering a range of options for users. The following example uses Gensim to model topics for US company earnings calls. These are quarterly conference calls in which company management discusses financial performance and other updates with analysts, investors and the media. They are an important fixture in the US financial calendar. The following code calculates coherence for the trained topic model in the example: Calculating the coherence score using Gensim The coherence method that was chosen in this example is \u201cc_v\u201d. This is one of several choices offered by Gensim. Other choices include UCI (\u201cc_uci\u201d) and UMass (\u201cu_mass\u201d). 
For more information about the Gensim package and the various choices that go with it, please refer to the Gensim documentation. Gensim can also be used to explore the effect of varying LDA parameters on a topic model\u2019s coherence score. This helps to select the best choice of parameters for the model. The following code shows how to calculate coherence for varying values of the alpha parameter in the LDA model: Investigating coherence by varying the alpha parameter The above code also produces a chart of the model\u2019s coherence score for different values of the alpha parameter: Topic model coherence for different values of the alpha parameter. Image by Author. This helps in choosing the best value of alpha based on coherence scores. In practice, you would also want to check the effect of varying other model parameters on the coherence score. You can see how this was done in the US company earning call example here. The overall choice of parameters would depend on balancing the varying effects on coherence, and also on judgment about the nature of the topics and the purpose of the model. Despite its usefulness, coherence has some important limitations. According to Matti Lyra, a leading data scientist and researcher, the key limitations are: Variability \u2014 The aggregation step of the coherence pipeline is typically calculated over a large number of word-group pairs. While this produces a metric (eg. mean of the coherence scores), there\u2019s no way of estimating the variability of the metric. This means that there\u2019s no way of knowing the degree of confidence in the metric. Hence, although we can calculate aggregate coherence scores for a topic model, we don\u2019t really know how well that score reflects the actual coherence of the model (relative to statistical noise). Comparability \u2014 The coherence pipeline allows the user to select different methods for each part of the pipeline. 
This, combined with the unknown variability of coherence scores, makes it difficult to meaningfully compare different coherence scores, or coherence scores between different models. Reference corpus \u2014 The choice of reference corpus is important. In cases where the probability estimates are based on the reference corpus, then a smaller or domain-specific corpus can produce misleading results when applied to set of documents that are quite different to the reference corpus. \u201cJunk\u201d topics \u2014 Topic modeling provides no guarantees about the topics that are identified (hence the need for evaluation) and sometimes produces meaningless, or \u201cjunk\u201d, topics. These can distort the results of coherence calculations. The difficulty lies in identifying these junk topics for removal \u2014 it usually requires human inspection to do so. But involving humans in the process defeats the very purpose of using coherence, ie. to automate and scale topic model evaluation. With these limitations in mind, what\u2019s the best approach for evaluating topic models? This article has hopefully made one thing clear \u2014 topic model evaluation isn\u2019t easy! Unfortunately, there\u2019s no straight forward or reliable way to evaluate topic models to a high standard of human interpretability. Also, the very idea of human interpretability differs between people, domains and use cases. Nevertheless, the most reliable way to evaluate topic models is by using human judgment. But this takes time and is expensive. In terms of quantitative approaches, coherence is a versatile and scalable way to evaluate topic models, notwithstanding its limitations. In practice, you\u2019ll need to decide how to evaluate a topic model on a case-by-case basis, including which methods and process to use. A degree of domain knowledge and a clear understanding of the purpose of the model will help. 
The thing to remember is that some sort of evaluation can be important in helping you assess the merits of your topic model and how to apply it. Topic model evaluation is an important part of the topic modeling process. This is because topic modeling offers no guidance on the quality of topics produced. Evaluation helps you assess how relevant the produced topics are, and how effective the topic model is. Evaluating topic models is unfortunately difficult to do. There are various approaches available, but the best results come from human interpretation. This is a time-consuming and costly exercise. Quantitative evaluation methods offer the benefits of automation and scaling. Coherence is the most popular of these and is easy to implement in widely used coding languages, such as with Gensim in Python. In practice, the best approach for evaluating topic models will depend on the circumstances. Domain knowledge, an understanding of the model\u2019s purpose, and judgment will help in deciding the best evaluation approach. Topic modeling is an area of ongoing research \u2014 newer, better ways of evaluating topic models are likely to emerge. In the meantime, topic modeling continues to be a versatile and effective way to analyze and make sense of unstructured text data. And with the continued use of topic models, evaluation will remain an important part of the process. [1] J. Chuang, C. D. Manning and J. Heer, Termite: Visualization Techniques for Assessing Textual Topic Models (2012), Stanford University Computer Science Department [2] J. Chang et al, Reading Tea Leaves: How Humans Interpret Topic Models (2009), Neural Information Processing Systems"
}

Now let's combine the code and save the results to a CSV file.

import requests
import feedparser
import csv
import json

url = "https://api.promptapi.com/pipfeed"

headers= {
  "apikey": "YOUR_API_KEY"
}

# File we want to save the articles to
csv_file = "articles.csv"

def extract_article(article_url):
    # POST the article URL to the extract API and parse the JSON response
    payload = article_url
    response = requests.request("POST", url, headers=headers, data=payload)
    return json.loads(response.text)


def save_to_csv(extracted_articles):
    # Write the extracted articles to the CSV file, one row per article
    keys = extracted_articles[0].keys()
    with open(csv_file, 'w', newline='') as output_file:
        dict_writer = csv.DictWriter(output_file, keys)
        dict_writer.writeheader()
        dict_writer.writerows(extracted_articles)


NewsFeed = feedparser.parse("https://towardsdatascience.com/feed")
extracted_articles = list()

print("Total entries found in feed: " + str(len(NewsFeed.entries)) + "\n")
for i, entry in enumerate(NewsFeed.entries):
    print(str(i) + ": Extracting url: " + entry.link)
    extracted_article = extract_article(entry.link)
    extracted_articles.append(extracted_article)

print("Saving articles to csv")
save_to_csv(extracted_articles)

The above code will extract all the articles that appear in the RSS feed for https://towardsdatascience.com/feed and save them to a CSV file called “articles.csv”.

You can now use this data for training models, analytics, or anything else you have in mind. Let us know what you think about the API and the tutorial in the comments.
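Once saved, the CSV can be loaded back for analysis with Python's standard csv module. Here is a minimal sketch; the actual field names depend on what the API returned for your articles:

```python
import csv

def load_articles(csv_file):
    """Read the extracted articles back from the CSV produced above."""
    with open(csv_file, newline='', encoding='utf-8') as f:
        return list(csv.DictReader(f))

# Example usage (assumes a "title" field was present in the API response;
# adjust to the fields you actually saved):
# articles = load_articles("articles.csv")
# print(len(articles), [a.get("title") for a in articles[:3]])
```

Each row comes back as a dict keyed by the CSV header, so the data plugs straight into most analytics or training pipelines.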



How does Pipfeed’s recommendation engine work?

At Pipfeed we take pride in creating a world-class recommendation engine and have spent a lot of time perfecting it. Building a recommendation engine is hard, and measuring how well the algorithm is working is even harder. Here is how Pipfeed’s recommendation engine works.

The main feed is composed of “plugins”, each of which “competes” to add articles to the user’s feed.

Pipfeed’s plugins

Pipfeed has an internal plugin system, and each plugin takes a different approach to adding articles to the user’s main feed.

  • Subscriptions: Add articles from the blogs user has subscribed to
  • Interests: Adds articles from user’s Interests on their profile
  • Trending: Adds articles that are trending on PipFeed
  • Past Liked Blogs: Adds articles from user’s previous most liked blogs
  • Past Liked Interests: Adds articles from user’s past liked Interests
  • Collaborative filtering: Finds users with similar reading behavior and adds articles from those users’ past history
  • Content-based filtering: Adds articles similar to those the user has previously read
  • And more…

The Feedback Loop

Pipfeed’s recommendation engine rebalances the importance given to each plugin based on the user’s interactions. We measure which plugin’s articles the user interacts with most, and that plugin gets to put more articles in the user’s feed.

The system keeps “rebalancing” itself over time: the more the user uses the app, the more personalized the feed becomes.
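The rebalancing loop described above can be sketched in a few lines of Python. This is a toy model with a hypothetical weight-update rule, not PipFeed's actual implementation:

```python
class FeedRebalancer:
    """Toy feedback loop: plugins that earn more interactions
    get a larger share of the user's feed over time."""

    def __init__(self, plugins, learning_rate=0.1):
        # Start every plugin with an equal weight.
        self.weights = {name: 1.0 for name in plugins}
        self.learning_rate = learning_rate

    def record_interaction(self, plugin_name):
        # Boost the plugin whose article the user engaged with.
        self.weights[plugin_name] += self.learning_rate

    def feed_shares(self):
        # Normalize weights into the fraction of the feed each plugin fills.
        total = sum(self.weights.values())
        return {name: w / total for name, w in self.weights.items()}

rebalancer = FeedRebalancer(["subscriptions", "interests", "trending"])
for _ in range(5):
    rebalancer.record_interaction("trending")
shares = rebalancer.feed_shares()
# "trending" now fills the largest share of the feed.
```

A production system would also decay weights over time so the feed can shift back when the user's interests change.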


8 lessons we learned building PipFeed mobile app

I have spent the last 3 months building PipFeed.com. PipFeed is an A.I.-powered curated reading app; think of it as Pocket meets Medium. I am an ex-AWS engineer, and my main language of choice is Java, but I can also code in PHP, Python, JS, etc.

The backend for PipFeed is in Java, with a few services in Python, Node.js, and PHP. In January 2020 I learned Flutter and built the PipFeed mobile app with it. This was my first experience building a mobile app, and here are some of the lessons I would like to share with you all.

1) Have a CI/CD

For mobile apps, it is extremely important to have a CI/CD pipeline. We use Codemagic, which ties in nicely with our Flutter ecosystem. Mobile apps need a lot more work to release: signing, bundling, and so on.

P.S. I still don’t have CI/CD for my backends; I deploy my AWS backend from my laptop using CloudFormation templates.

2) Mobile apps crash a lot

Compared to websites, backend code, and stand-alone software, mobile apps fail and throw exceptions a lot more. There are all sorts of errors: network failures, images that fail to load, and so on.

To make the app stable, we had to add a lot of exception handling, null checks, and, above all, retries on almost all network calls.
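The retry pattern is the same in any language; our Flutter app uses Dart, but here is a Python sketch of the idea (the `fetch_article` in the usage comment is hypothetical):

```python
import time

def with_retries(call, max_attempts=3, base_delay=0.5):
    """Run a flaky network call, retrying with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # Out of retries: surface the error to the caller.
            # Wait longer after each failure: 0.5s, 1s, 2s, ...
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage (hypothetical): with_retries(lambda: fetch_article(url))
```

Wrapping every network call this way absorbs most transient failures without any user-visible error.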

3) Distributed logs

We use Sentry and Crashlytics to catch and manage all our exceptions and logs. We really love the service, and it has a nice pricing model. We have now integrated our Java backend with Sentry.

4) Analytics is everything

We use a mix of Firebase, Mixpanel, and AWS CloudWatch for our analytics. Analytics on mobile is a bit harder, as everything you want to track requires a code change. It would also mean a network call for each event.

We found a way around this by moving the analytics to our backend. When an action like a read, like, or comment triggers a database update, we invoke Lambda functions via DynamoDB Streams and send metrics from there to the other services. This reduces the overall number of network requests and keeps the logic out of the mobile app.
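A stream-triggered handler along these lines might look like the following sketch. The event shape follows the DynamoDB Streams format, but `send_metric` is a hypothetical stand-in for whatever analytics client is used (Mixpanel, CloudWatch, etc.):

```python
def handle_stream_event(event, send_metric):
    """Forward one metric per DynamoDB stream record that writes data.

    `event` follows the DynamoDB Streams shape ({"Records": [...]});
    `send_metric` is whatever analytics client the backend uses.
    """
    for record in event.get("Records", []):
        # Only inserts and modifications correspond to user actions here.
        if record.get("eventName") in ("INSERT", "MODIFY"):
            keys = record["dynamodb"]["Keys"]
            send_metric({
                "event": "item_updated",
                "keys": keys,
            })
```

In a real Lambda, this function body would be the handler, and the metric sink would batch events before sending them on.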

5) It’s so hard to A/B test

Compared to any other kind of software, A/B testing is really hard on mobile. With websites, you control the version people see when they visit your URL; on mobile, your users will be on a whole mix of versions, devices, screen sizes, and many more variables. We are still figuring out how to do this well.

Implementing A/B testing on mobile is harder because we need to create two separate code paths and use Firebase Remote Config or something similar to run one path for some users and the other path for the rest.
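One common way to split users deterministically between two code paths is to hash the user ID into a bucket. This is a sketch of the general technique, not Firebase's actual API:

```python
import hashlib

def ab_bucket(user_id, experiment, rollout_percent=50):
    """Deterministically assign a user to variant 'A' or 'B'.

    The same user always lands in the same bucket for a given
    experiment, so the experience stays stable across sessions.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "A" if bucket < rollout_percent else "B"
```

In practice the `rollout_percent` would come from a remote-config value, so the split can be changed without shipping a new app version.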

6) Login/SignUp is the most important screen in your app

The most important piece of code in your mobile app is your login/signup page. We learned this the hard way: we never paid much attention to our signup/login page, but when we analyzed the analytics, we saw that of all the people who installed the app, only 60% actually signed up.

Hence, we now test our signup page rigorously. Even after so much work, we still keep finding bugs and errors in our login and signup flow.

7) Updates are harder because of backward compatibility

With AWS, I am used to pushing to Production almost every other day. But with mobile, you need to take care of backward compatibility a lot more.

For example, we have a bug in our app where a blog shows as “unfollowed” even when the user is following it. This was caused by two bugs: one in the mobile app and another in our backend. If I fix the backend bug now, the current mobile app will break. So what did we do? We fixed the bug in the mobile app, and after around 15 days, or once 90% of users have updated to the latest app, we will push the fix to the backend.

8) Have a clear 2-way communication channel with users

It is very important to have some way for users to reach out to you. At first, we thought just asking users to email us would be enough, but only around 5% of all users ever emailed us. That’s when we found Instabug. It is a really amazing tool with a 2-minute installation; of all the tools mentioned here, it was the easiest to set up. We run all sorts of surveys, including Net Promoter Score surveys, surveys to nudge users to upgrade, etc. Instabug also lets users submit bug reports and even vote on feature requests.

If you have a mobile app, it is very important to run an NPS survey. This is probably the one metric VCs really care about.


That’s all for now. These were the lessons I have learned from working on mobile apps over the last 3 months. I hope they were helpful.

Do check out PipFeed. The app is in both the Play Store and the App Store; let me know what you guys think.


SEO is killing good content and it needs to stop!

In this modern age we have access to a practically infinite amount of information; you can search for almost anything you want. With this comes a need to filter out the noise, so that the right user gets to the right content and vice versa. Search Engine Optimization (SEO) seeks to make that quest as user-friendly as possible. However, if you want to create content, you must understand how the algorithms and requirements that move content to the top actually work. Top blogs use SEO techniques to get highly ranked. But as we focus on SEO more than on the quality of the content, it seems we are trying to please robots instead of humans.

Evolution of SEO:

The idea of search engines dates back to 1945, as a way to store information and make it available to the masses. Big names like Yahoo and Google came to the table in the early 90s, and more and more companies were accused of gaming the simple search systems to get their content to the top. Examples of these tricks include broken links, keyword stuffing, and content farms (too-frequent, low-quality posts). That is why SEO was born.


In the past, content creators focused on their theme and on how to engage their readers. This meant delivering original content that wasn’t already everywhere and, where needed, presenting it in a style that was original to them. All of this changed once Google took charge of the delivery of content around the world. Creators have now learned to think SERP-first instead of subject-first. How will their content be found? What keywords do they need to cover? How should they use hyperlink anchor text to teach Google which search terms to associate with their websites? Now the top blogs are only those that are SEO-optimized.

Google is not entirely to blame for this. Processing the world’s knowledge and presenting it a few results at a time is not a trivial job, particularly when hundreds of marketing pros are learning to play the game to place their material above all others. But it leaves content creators struggling to second-guess how their content will be viewed by Big G. What might be good content gets tainted with hidden agendas, leaving us with an Internet more filled with trash than with valuable content.

Over time, SEO rules have become more complex, and there are now more than 200 standards to meet. But do those generic measures actually surface the highest-quality content?

How does SEO dilute web integrity?

Most top blogs contain SEO-focused content, and that dilutes the integrity of the web. How?

First, it is much easier to create SEO-optimized content, because many content creators lift material from other top blogs on the topic and call it inspiration. Since the same content is rewritten again and again with only slight differences, nobody knows who wrote it first. Second, the primary aim of an SEO-focused blog post is to attach a site to a common search subject, which means the subject has already been discussed elsewhere ad infinitum. Moreover, SEO does not reward in-depth posts and encourages keeping word counts low, which is why most top blogs run no longer than five hundred words. The material covered across ten blog posts on Google can often be found in a single book.

Can good-quality content reach people without SEO?

The discussion above makes it clear that SEO compromises content quality, but what is the solution? Ranking at the top of Google’s search results is still necessary to reach the most readers and, consequently, to become a top blog. You don’t need to worry about that anymore, since Pipfeed.com is here to help.

What is Pipfeed and how does it work?

Pipfeed is an article-reading application built on artificial-intelligence algorithms. It learns from user behavior and can predict which articles a user would like to read and, in particular, which articles they wouldn’t want to see.

We scan hundreds of articles each day and use artificial intelligence to find relevant articles for each user. It is like having a magazine that is curated, edited, and published just for you. We not only bring readers around the world to your content but also enable them to interact with you and reach out to you.

Most apps focus only on news, not articles. That makes Pipfeed the best app for getting your blog posts published and in front of the widest possible audience. Use Pipfeed for your blogs and they will become top blogs.

Conclusion:

Writing is moving away from delivering a message, making a point, or telling a story; now it’s all about traffic and clickbait. The good news is that as this epidemic spreads, so does awareness of it. People are more skeptical about what they read on the internet, and companies are innovating new user-engagement strategies. Overall, writers seem to be becoming more educated but less imaginative; less original, but no doubt more constrained. With Pipfeed, however, you can get your blog featured among the top blogs, which lets you stop worrying about SEO and focus on the quality of your content.

So what are you waiting for? Submit your blog today on Pipfeed.com, and within a few days it can become one of the top blogs among readers on Pipfeed. Many content writers are already using it, and they no longer have to worry about SEO while creating content; they just focus on quality. You don’t have to think about common search terms, keywords, anchor text, or word count while you write. We let you concentrate on the content. We hope Google upgrades its algorithm too, because SEO is killing good content and it needs to stop!


3 Big Problems with Social Media as a News Source (and the Solution)

It’s no surprise that social media platforms are serving as virtual newsstands. If news is meant to travel fast, then there’s no faster informational highway than social media. However, there are three fundamental concerns about consuming news from social feeds. The articles are often limited to your friends’ circle, not relevant to your interests, and come from untrusted sources.

Thankfully, there is a new way to retake control of your news feed without reviving newspaper subscriptions.

3 Main Reasons You Should Rethink Social Media as a News Source

There are plenty of reasons you should turn away from social media for news and information, but here are the top three that should be enough to convince you to look elsewhere.

News Limited to Your Friends’ Circle

Regardless of how many friends, follows, or followers you have online, social media platforms rely heavily on shared information. What appears in your social media feed, therefore, is greatly controlled by your friends and follows. Instead of controlling your news feed, you are at the mercy of your friends clicking ‘Share’.

Social media is great for connecting you with like-minded people, but it falls short in broadening your horizons. It also falls short in challenging your intellect, with the same old basic information circulating in your social feeds. You need to be able to easily target new ideas and dive deeper into your interests.

Articles Not Relevant to Your Interests

You probably have a lot in common with your social media friends, but articles of special interest to you are probably few and far between. Sure, there are plenty of posts on parenting, exercise, and food, but is that all there is to read about? What if you want to go beyond the basics of those topics? For instance, you want to learn more about healthy food but only want articles for picky eaters. There is a blog for that! But you’re likely not going to come across it on social media.

And what if you have less-mainstream interests you want to learn more about? Let’s say you are more interested in articles about coding and artificial intelligence than what exercises will tighten your buns. Unless the majority of your friends share your interest, your social media feed will rarely offer relevant content.

Untrusted Sources

There’s a reason #FakeNews is always trending. Social media is chock full of articles that are chock full of false information. And the inaccuracies are not limited to traditional news and politics. There are countless “experts” on social media that promote their opinions as truth. Not to mention the trolls and pranksters who purposely spread fake news. And unfortunately, everybody is guilty of sharing articles before validating their authenticity.

A Better Information and News Source than Social Media

The obvious solution to consuming higher quality information is to go directly to sources you trust and are interested in. But unfortunately, that takes quite a lot of digging and jumping between apps, websites, and blogs. Thankfully, there’s a more user-friendly solution.

Like you, we were concerned and wanted a better way to connect with blogs, news, and our hobbies. So, we developed Pipfeed. In the same way your favorite music app customizes your music based on your interests, Pipfeed customizes your reading material.

Pipfeed is a personal reading app that connects you with articles and blog posts based on your interests. Our AI technology recommends the top blogs and delivers the best articles straight to your personalized Pipfeed. Visit our website to download the free app today.

Check out the Top Articles on PipFeed today.


Top 10 blogs that will make you a better leader

Leadership roles are constantly evolving in the current business scenario. The responsibilities a leader shoulders supersede the benefits the role entails. The diversity that corporates are now embracing paves the way for teams made up of diverse members. The leader, in such cases, is the glue that holds the team together and gets the best out of each of them. If you aspire to be a prolific leader who is ambitious about taking your company to greater heights, blogs are a great source of valuable leadership lessons.

Here are a few leadership blogs whose valuable insights will help shape you to be a better leader.

Great Leadership

Great Leadership provides you with actionable leadership opinions and information that will help improve your leadership capabilities. This blog is the creation of Dan McCarthy, who has over 20 years of experience in coaching aspiring leaders through his speaking engagements, books, and articles. In this blog, Dan reminisces about the leadership opportunities and challenges that he was faced with during his tenure in addition to coaching aspiring leaders.

Eric Jacobson On Management And Leadership

Eric Jacobson’s blog is a great place for leaders and managers to pick up valuable tips, ideas, and techniques on management and leadership. This blog also publishes posts that challenge the aspirants to upskill their leadership traits by providing clear instructions that are instantly applicable. The blogger’s experience as a strategic planner, marketer, and leader adds great value to the insights he shares.

Joan Garry Consulting

It is important to study the strategies of leaders who operate in different business spheres. Following this blog will give you strategic insights by Joan Garry, who has gained years of experience by successfully operating in both entertainment and non-profit businesses. Her blog is directed towards board leaders and executive directors and discusses topics such as leadership attributes, disaster planning, and handling staff burnout, among others.

How We Lead

The blogger behind How We Lead, Ken Blanchard, is a renowned author, speaker, and consultant in the area of leadership. He is zealous about sharing his ideas on servant leadership, and his blog publishes posts that help leaders bring positivity and humility into their practices and decisions.

Leadership Insights by Skip Prichard

This blog publishes leadership insights by Skip and other renowned leaders, creating a valuable continuum of posts that will help you work your way toward becoming a better leader. This blog, too, believes in the concept of servant leadership and highlights the impact of its adoption on organizational success. You can also find interviews here in which visionaries share priceless practical acumen drawn from their leadership journeys.

Jesse Lyn Stoner on Leadership

Jesse Lyn has over 30 years of experience as a business consultant and executive, relentlessly working to create high-performing, collaborative organizations. Her passion in this area compelled her to start the Seapoint Center, a network of leadership experts who coach aspirants in developing the skills required to impact the world positively. Topics such as diversity, inclusion, and goal achievement are discussed unabashedly by Jesse and other guest bloggers here.

SmartBrief

SmartBrief publishes leadership posts authored by bloggers, corporate executives, and thought leaders operating in various industries. No matter which industry you operate in, SmartBrief has insights that are relevant for you. The blog also suggests and gives you access to newsletter subscriptions that will enable you to keep yourself updated about leadership practices that are trending in your industry.

John C. Maxwell Blog

Having authored over 100 books on the topic, John has invested himself in posting leadership insights on his blog for the immediate consumption of aspiring leaders. He trains and challenges aspirants to become better leaders by maximizing their potential through learning and leading with excellence. Topics on leadership decision-making, dealing with setbacks, and leadership focus, among others, are discussed by multiple authors on this blog.

Brian Tracy

Brian Tracy, a CEO with extensive expertise in leadership strategy, imparts valuable insights on his blog to help you develop and wield leadership capabilities. The blog features posts on numerous leadership topics, including self-motivation, optimism, and tips on productivity and profitability. Brian draws on his experience as a leadership coach and speaker to deliver acumen that is apt for budding leaders.

Leadership Freak

This blog by Dan Rockwell is packed with short yet powerful posts that can educate and challenge you to sharpen your leadership skills. Dan has been listed among the best leadership experts and speakers by various reputed establishments, including Inc Magazine and AMA. The blog features topics such as creating opportunities, leadership stress, and emotions in leadership, that provide impactful insights that can educate and strengthen aspiring leaders.


Top 10 Food Blogs that will Tickle your Taste Buds

Are you a hardcore food-lover? Do you love to experiment with new dishes in your kitchen? If yes, then we have brought a list of the top food blogs that you ought to follow.

Dishing up the dirt

Whether you have friends coming over or simply want to enjoy a meal with your family, this blog has it all. 

Andrea Bemis is a veggie lover, and her love for food led her to start this blog. Whatever your favorite vegetable is, you will find a lip-smacking recipe for it here. Her Moroccan carrot soup with spiced chickpea croutons is a big hit among her followers.

Pinch of Yum

Are you looking for some yummy recipes? Then head straight to this blog, started by a teacher turned blogger. Follow Pinch of Yum for finger-licking recipes and you will not be disappointed.

Whether it’s breakfast, lunch, dinner, or snacks, this page has hundreds of recipes to satiate your taste buds.

Budget Bytes

Budget-friendly and well-written are the two adjectives that beautifully describe this blog. If you want to enjoy tasty food without burning a hole in your pocket, then Budget Bytes is your savior.

With simple and easily available ingredients, you can make tasty food the inexpensive way.  

Damn Delicious

Chungah started this blog as a hobby and it has now turned into a full-time job for her. With the help of simple and fresh ingredients, she creates several mouth-watering dishes that are loved by her readers all over the world.

Minimalist Baker

If you are looking for healthy and gluten-free recipes then this blog has got you covered. The blog has some of the most flavorful and simple to make recipes that make it a hit with the followers.

Gimme Some Oven

Gimme some oven is a wonderful blog that will provide you with some easy to prepare and tasty food recipes. You will also find a lifestyle and DIY section on this blog. Ali Ebright started this blog to collect all her recipes in one place and today it is one of the well-known food blogs.

Cookie and Kate

If you are not following this blog, then you are missing out on some really delicious vegetarian recipes. You will find some interesting recipes that can be made with real food yet turn out absolutely scrumptious.

Love and Lemons

You will find colorful dishes, often with a touch of lemon, on this page. There are several vegetarian recipes that will catch your attention and tickle your taste buds. Head over to the blog for some wonderful, flavorsome recipes.

Sally’s Baking Addiction

Sally’s Baking Addiction is dedicated to all baking fans. From cookies to cakes, you will find the most alluring baking recipes here, ones you cannot resist.

This food blogger also has three best-selling cookbooks to her name.

Sprouted Kitchen

This food blog is perfect for all those who need family recipe ideas. There is a vast variety of recipes for your family. From appetizers to bread, and dessert to baby food recipes, this is the ultimate food blog to follow.

These blogs will provide you with delicious and healthy recipes. If food is your bae, you must visit them for fun, new recipes.


Marketing Matters: Know Where Your Customers are to Reach Your Target Audience

Marketing your business has a direct impact on your success or failure. If your marketing strategy is nothing more than trying out a few ads, you are missing out on a range of opportunities to engage with new customers. Small business owners are busy, and this often means that marketing efforts take a back seat while the rest of the work of running the business gets done. Even if your time and budget for marketing are limited, learn where your customers hang out and make a targeted effort to reach them.

Social Media and Building Platforms

Where you focus your social media efforts will depend on your customer base. You will want to set up profiles on a number of social media platforms and learn how to use each one. Invest in finding followers for your business profile, and know what it takes to engage your followers. Good content that attracts customers will help you grow your list of followers and provide you with a large base to build upon.

Reaching Out to Customers

As you gain traction on social media platforms, consider what works for your business when it comes to content creation. If you run a restaurant, you can share last-minute deals that are time-limited on nights you aren’t busy. You can create content that showcases your goods or services, or that answers relevant questions in your industry. Engage with your customers so that they can get to know who you are as a company and start building loyalty to your brand.

Try a Variety of Strategies

Social media marketing, content creation, radio ads, and printed materials can all make a difference in your marketing strategies. Take the time to try a number of strategies to see what works for your business. It may take some effort to try new ideas, but it can be worth it when you are able to bring in new customers.

Marketing is always evolving. Learn about social media platforms, and don’t be afraid to try new things. Focus on content creation and see what engages your customers. Once you determine what your customers want to know more about, it becomes easier to develop content and reach out to get your customers interested in what you have to offer.


Be informed about the coronavirus! Read these articles selected by PipFeed

These articles were selected by PipFeed’s A.I. algorithm. You can search for more keywords here: https://blog.pipfeed.com/top-articles/?topic=coronavirus&requestType=SEARCH
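To link to a similar search for any other topic, the query parameters in the URL above can be assembled programmatically. A minimal sketch in Python (the `topic` and `requestType` parameters are taken from the URL above; the `search_url` helper itself is ours, not part of PipFeed):

```python
from urllib.parse import urlencode

BASE = "https://blog.pipfeed.com/top-articles/"

def search_url(topic: str) -> str:
    # Build a PipFeed topic-search URL; urlencode also handles
    # percent-escaping for topics that contain spaces or symbols.
    return f"{BASE}?{urlencode({'topic': topic, 'requestType': 'SEARCH'})}"

print(search_url("coronavirus"))
# https://blog.pipfeed.com/top-articles/?topic=coronavirus&requestType=SEARCH
```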

Search results for: coronavirus

The Coronavirus Acceleration Is Upon Us
By Digiday

How To Misinform Yourself About The Coronavirus
By The Atlantic – World Edition

Somebody Was Infected With The Coronavirus After Attending A “Coronavirus Party” In Kentucky
By BuzzFeed

A Frontline Physician Speaks Out On The Coronavirus
By The Atlantic – World Edition

Compaq And Coronavirus
By Stratechery

My Hometown Is Being Ravaged By The Coronavirus
By The Atlantic – World Edition

Even The Coronavirus Has A Silver Lining
By The Ascent

Can A Fart Give You Coronavirus?
By Boingboing

The U.K.’s Coronavirus ‘Herd Immunity’ Debacle
By The Atlantic – World Edition

Pregnant During Coronavirus
By Healthyhappylife

Defending Yourself Against Coronavirus Scams
By Stackoverflow

Read articles like these and much more.

Download PipFeed today: http://blog.pipfeed.com/

