

The battle between Waterfalling and Header bidding in today’s programmatic advertising landscape

The advent of technology has changed the way brands and agencies buy digital advertising placements on publications around the world. It is a human attention business in which publishers strive to create valuable and newsworthy content on their digital assets, with the aim of attracting as many readers or eyeballs as possible. On the other hand, advertisers such as brands and agencies bid to have their advertising banners and videos appear alongside the articles, videos or content of a variety of publishers.

Publishers are concerned with maximising yield (i.e. making as much revenue as possible from the ads on their sites) while advertisers (brands or agencies) are more interested in attaining the best return on their spend. In a nutshell, publishers want the best revenue per thousand impressions (RPM) whilst advertisers seek the most efficient cost per thousand impressions (CPM), which in turn helps them achieve marketing goals such as brand awareness, clicks to a website, leads, online purchases or offline store visits. Waterfalling and Header bidding are two ways publishers auction their ad inventory to advertisers. We’ll look at both concepts in detail in the subsequent sections of this blog and see how they compare.

Publishers are always concerned with yield optimisation or, in layman’s terms, making the highest buck from advertisers. Have you ever watched popular property auction shows like Homes Under The Hammer? In Homes Under The Hammer, residential and commercial properties or plots of land are auctioned to buyers who bid until the highest offer secures the property. The auction can carry on for a while depending on the number of interested buyers and the bids and counter-bids received. This could be likened to a Waterfalling auction process in programmatic advertising, except that with Waterfalling, unsold ad inventory (the unsold properties in an auction show) is moved on to other buying networks.

On the other hand, in the case of Header bidding, imagine a property auction programme where houses are offered to a variety of auction houses at the same time. Assuming we’ve got 5 buyers each in 10 separate property auction houses, the presenter can open the auction to all 10 property houses simultaneously, which increases the pool of potential buyers interested in the property and will most likely increase the number and amount of bids received. In the end, the property is likely to sell for more with the Header bidding model, where a presenter links with multiple property auction houses at the same time, rather than offering to just one auction house at a time and moving down the hierarchy of property houses. Waterfalling and Header bidding work in a similar fashion. Let’s get into the actual meanings of these concepts and how they impact the programmatic landscape.

 

So what is Waterfalling?

Waterfalling in this respect does not have anything to do with the popular Victoria or Niagara Falls. Waterfalling in programmatic advertising refers to a process where publishers move their advertising inventory from one Supply Side Platform (SSP) to the next with the aim of achieving the best bids or yields. Publishers use a Waterfall process or technique to attain the highest possible yields and sell-through rates. This auction process is also nicknamed “daisy chaining.”

In this format, publishers often initially offer their inventory to advertising networks that usually offer higher bids and work their way down to platforms with lower rates. This helps them monetise or sell off as much inventory as possible. Going back to the property auction example, it is like starting with an auction house with very wealthy buyers and offering the leftover properties to houses with less wealthy or more prudent buyers. The publisher normally sets price floors, which are the lowest amounts they are willing to offer ad impressions for (in blocks of a thousand impressions), presents these to SSPs with high-spending advertisers, and then lowers the price floors of unsold inventory as it is offered to lower-spending networks.

 

Now, what is Header bidding?

Header bidding was introduced to improve the efficiency and yields of publishers. It is often considered an improved version of the Waterfalling technique and is increasingly adopted by major publishers across the globe. Within the industry, it is also known as pre-bidding or advance bidding. The idea behind Header bidding is that publishers offer their ad inventory simultaneously to all possible Demand Side Platforms (DSPs) for auction.

With this, there is clear, real-time visibility of bidders, and the highest offer from an advertiser wins the ad impression on the publisher’s site. The thought behind this technique is to prevent moving inventory back and forth, which could ultimately lead to unsold placements and outright ad wastage. Header bidding, therefore, seeks to help publishers make more money and also ensures advertisers can secure a quick placement for the right bid.
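To make the contrast concrete, here is a toy Python sketch of the two auction flows described above. The SSP names, bid ranges and price floors are entirely made up for illustration: the waterfall offers the impression to one demand source at a time with falling floors, while header bidding collects bids from every source at once and takes the highest.

```python
import random

# Toy simulation only: made-up demand sources and bid ranges, purely to
# contrast the two auction flows described above.
SSPS = {
    'premium_ssp': (1.50, 4.00),    # (min CPM, max CPM) a source might bid
    'mid_tier_ssp': (0.80, 2.50),
    'remnant_network': (0.20, 1.00),
}

def get_bid(name):
    low, high = SSPS[name]
    return round(random.uniform(low, high), 2)

def waterfall(floors=(2.00, 1.00, 0.50)):
    # Offer the impression to one source at a time, lowering the floor as we go
    for name, floor in zip(SSPS, floors):
        bid = get_bid(name)
        if bid >= floor:
            return name, bid
    return None, 0.0  # impression goes unsold

def header_bidding():
    # Ask every source at once (real wrappers such as Prebid.js do this
    # asynchronously within a timeout) and keep the single highest bid
    bids = {name: get_bid(name) for name in SSPS}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

print('Waterfall:     ', waterfall())
print('Header bidding:', header_bidding())
```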

 

Concerns with Header bidding and how they are addressed

To implement header bidding, publishers are required to add a simple JavaScript wrapper to their website from leading sources like Prebid.org. We do understand the immense positives of this auction process, but one of the negatives expressed by some publishers is page latency, or slower page load times for users. The Prebid open source project has addressed this concern by ensuring ad calls are asynchronous and load together with the page content. Prebid calls are made concurrently within a timeout of around 100 milliseconds, which means users do not have to wait too long before the articles or content of a publisher are loaded.

Technology has certainly changed the ways in which ads are bought from publishers, as the role of the trading desk or human salespeople has been reduced or replaced by a more efficient programmatic method. There is a lot of terminology and there are many elements to programmatic advertising, but it is certainly a great way of buying digital ads. The aim of the publisher is to achieve the highest yield for their inventory whilst ensuring the experience of users is not negatively impacted by irrelevant ads or page latency. On the other hand, the aim of advertisers is to achieve their advertising goals and also ensure there is no damage to their brand in the process.


The Brighton SEO conference of September 2017

The BrightonSEO Conference brought together great digital marketing minds in the spacious Brighton Centre. The conference was an all-day event and was broken down into four sessions with a keynote at the end. The sessions touched on different aspects of digital marketing such as technical SEO, content, APIs (Application Programming Interfaces), PR, paid social, paid search, chat-bots, mobile and a variety of other related areas.

There was a plethora of information to consume from the variety of speakers on stage, flipping through slides of experiments, charts, code and dashboards. The climax of the conference was the opportunity to listen to the interview with Gary Illyes, Google’s Webmaster Trends Analyst. Within the field of SEO, there are certain developments that seemed a bit unclear, but the keynote session left attendees more enlightened and reassured. Jennifer Slegg from the Search Engine Marketing Post conducted the interview and probed Gary to shed light on burning SEO issues.


Key points from the interview with Google’s Gary Illyes (@methode)

  • HTTPS: A few of the questions touched on the impact of HTTPS (Hyper Text Transfer Protocol Secure). HTTPS is important for security, user confidence and SEO. Gary stated that Google will not penalise any sites that fail to migrate to HTTPS in the future. However, at Cariad we continue to see sites that have not converted to HTTPS dropping in organic rankings, so we are not in total agreement with him; it would be good to see further evidence.
  • Google’s Penguin Algorithm: He re-emphasised that Google’s Penguin algorithm is now in real-time and we shouldn’t expect any further announcements or updates. However, the real-time status of Penguin does not prevent a manual action or penalty in areas of unnatural link building. He reiterated that any manipulative process taken to acquire links to deceitfully influence rankings could lead to a manual action.
  • Link Audit: Search engine optimisation professionals regularly audit the inbound links to a client’s website to spot the presence of any low-quality or unnatural links. Gary stated that regular link audits are purely a waste of time. He gave an example of how he rarely reviews the inbound links to his personal site, which attracts over 100,000 visits a week. This was quite astonishing, as many of the attendees felt running a regular link analysis would be helpful in preventing any link-related penalties.
  • Using the Disavow Tool: Late last year, Gary stated that the Penguin 4.0 update would make the use of the disavow tool unnecessary. He did add that, if it makes SEOs feel better, the tool could be used in an experimental way to ascertain its impact on rankings.
  • Algorithm Updates and Fred: We were informed that Google makes about 2-3 algorithm updates a day. Gary has asked the SEO community to refer to any unconfirmed algorithm update as Fred, which is how the update behind the traffic fluctuations webmasters woke up to in March this year got its name, and it led to different blogs and comments on the possible causes of the algorithm adjustment.
  • Mobile-First Indexation 2018: There were a few questions around Google’s plan to implement mobile-first indexation next year. Gary stated that mobile-responsive websites do not have to make any changes. He advised that companies with separate desktop and mobile sites will need to ensure there is a replica of content on both versions. In addition, webmasters are advised to maintain the ‘rel alternate’ tag and should refrain from adding any new ‘rel canonical’ tags.
  • Link Building: The question on broken link replacement was put forward to Gary. He enthused that there is nothing wrong with pursuing a broken link building strategy as long as there is contextual relevance. In essence, if we request a site’s owner to replace a broken outbound link on an existing article with ours, there has to be topical relevance. Google will ignore contextually irrelevant links and will refrain from transferring link equity.
  • Accelerated Mobile Pages (AMP): AMP is an open source initiative from Google that enables the creation of websites and ads that are consistently fast, appealing and powerful across devices and distribution platforms. In a nutshell, it facilitates a faster experience for mobile users. Gary told attendees that there is no rank boost for sites that implement AMP. The Google Trends Analyst also added that the search engine giant is looking into websites that utilise AMP but are publishing thin content and then referring users to the full site. Some of these sites implement these redirects to deceptively refer users to advertising served on their actual sites. Google is currently investigating websites with this structure.

Overall, the Brighton SEO Conference afforded digital marketing professionals the unique opportunity to acquire knowledge across a wide range of topics. The keynote address was the climax of the entire event, as it was great to get Google’s point of view on contemporary and contentious search-related topics.


15 steps and tips to creating effective Facebook product catalogue ads

Facebook as an advertising platform has grown over the past few years, with revenue from ads totalling $26bn in 2016, an impressive 57% increase on the $17bn achieved in 2015. Running effective social media marketing campaigns on platforms like Facebook has become important to marketers and businesses.

The ACC structure of Facebook ads has matured in recent years to include a variety of campaign objectives that cater for lead generation, eCommerce and retail businesses. ACC, in this case, stands for awareness, consideration and conversion. Conversion (lead generation), product catalogue sales and store visits are the three campaign types that can be created at the conversion stage of the customer journey. Our focus will be on providing useful tips for setting up your first Facebook ads product feed. Let’s dive right into the useful steps and optimisation tips.

Steps in creating Facebook ads product catalogue

Creating the product feed is the first step in creating an effective Facebook product sales campaign. The Facebook product feed is similar to the shopping feed in the Google Merchant Center, but there are certain differences.

1) Visit the product catalogue section of your Facebook ad account or business manager and click on the product catalogue drop-down.

Facebook catalogue sales ads

2) The next step is to create the name of the product catalogue and indicate if they are physical products sold online, travel tickets or holiday destination deals. Additionally, it is quite important to select the appropriate owner of the product catalogue. This issue arises if you have an individual Facebook ad account and are also linked to the Facebook business manager of your agency account. If you make your agency’s Facebook business manager the owner of the product catalogue, you’ll have to be an administrator of your agency’s Facebook business page in order to upload the feed.

Facebook catalogue image

Facebook catalogue ownership

3) The next step in creating your product catalogue is the completion of the currency information. This is the currency of the actual product, and Facebook supports a variety of currencies. You also indicate whether the product feed will be a one-time upload or a recurring upload.

Adding product feed to catalogue

4) The next step is to upload your product feed, which can be in any of the following formats: CSV, TSV, RSS XML or ATOM XML. For this blog, we’ll focus on how to create an effective product feed in CSV.

Sample Facebook product feed

5) You’ll have to create a product feed that meets Facebook’s guidelines. Some marketers make the mistake of replicating their Google Merchant Center feed in their Facebook product catalogue account and end up with a disapproval. It is important to use plain text (string) values in certain fields like ‘product condition’ and ‘availability.’

6) You can leave the columns empty for fields that are not relevant to your business. For example, if you manufacture unique and customised products and there is no GTIN (Global Trade Item Number) or MPN (Manufacturer Part Number) available, you can leave those columns empty.

7) The Facebook product feed guide indicates the required fields, but creating a simple CSV file with the following fields – ID, title, description, link (the product landing page), condition, availability, price, brand and Google product category – is a solid starting point. The google_product_category column simply uses the same taxonomy you use for your AdWords Shopping campaigns. The sketch below gives you an idea of a sample product catalogue in CSV.
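As an illustration only (the column names follow the list above, while the product values, brand and file name are invented), a minimal Python sketch that writes such a feed could look like this:

```python
import csv

# Hypothetical example row; the column names follow the list in step 7,
# the product data itself is invented for illustration.
FIELDNAMES = ['id', 'title', 'description', 'link', 'condition',
              'availability', 'price', 'brand', 'google_product_category']

products = [{
    'id': 'SKU-001',
    'title': 'Maple Skateboard Deck 8 inch',
    'description': 'Hand-finished 8-inch maple deck.',
    'link': 'https://www.example.com/decks/maple-8',
    'condition': 'new',                      # plain text value, as noted in step 5
    'availability': 'in stock',
    'price': '45.00',                        # no currency symbol, see step 9 below
    'brand': 'Example Boards',
    'google_product_category': 'Sporting Goods > Outdoor Recreation > Skateboarding',
}]

with open('facebook_product_feed.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
    writer.writeheader()
    writer.writerows(products)
```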

8) You do not need to include the shipping cost on the Facebook product feed. This is quite different from what is required on Google or Bing.

9) It is also essential that the price fields do not include the currency symbol or the currency acronym or initials (e.g. 25.00 and not £25.00 or 25.00 GBP).

10) You should ensure all product images have a resolution of at least 600 x 600 pixels. Once all of the columns have been completed, you can then upload the feed and Facebook immediately informs you whether there are any errors. Within a few minutes, your product feed is ready to be used for your new campaign.

 

Tips to optimise your Facebook product catalogue

11) Product events: It is important that your product catalogue is linked to your Facebook pixel. This helps you view product performance against your set of custom conversions. It highlights the products that have sold the most and those with the highest cart abandonment rate. These findings will help you make improvements like adding an attractive offer or discount or tweaking the ad copy of certain products.

Facebook product events

12) Product sets: These are quite similar to product groups in Google Shopping campaigns. Facebook enables the creation of product sets or groups based on category, brand, gender, condition, size or price. Creating a variety of product sets will help improve your campaign performance by targeting a given audience with relevant products. There are several parameters that can be used to exclude or include products, such as ‘contains,’ ‘starts with,’ ‘is not,’ ‘is any of these,’ and a few more. Also, it is important to know that once product sets are created, they cannot be edited; hence, it is important to add all the required products to a set before saving.

13) Diagnostics: It is important this section is constantly checked for any errors that require urgent attention. Some of the common errors you can discover in this section include image resolution issues and Facebook pixel compatibility concerns.

Creating Facebook product sets

14) Products: The products section is quite useful in presenting a complete snapshot of all the products in your feed. You can easily spot products with errors and fix them appropriately. There are filters you can use to determine which products are displayed, to prevent viewing your entire catalogue.

15) Product catalogue settings: You’ll need to review the catalogue settings to ensure that images are displayed appropriately for dynamic product Facebook ads.

These are the steps to creating a CSV product feed and optimising the catalogue to ensure the campaign delivers a great return on investment. A properly set up Facebook catalogue ensures the campaign creation and delivery are smooth and perform at an optimal level.


How stemming and lemmatisation improve search engine results

Why do you always visit Google or your favourite search engine to seek an answer to a burning question? I assume it is down to the reliability, or at least acceptable accuracy, of these search engines in providing content or rendering websites that meet your needs. As humans, it can be somewhat difficult for others outside our immediate circle to understand our thoughts or intentions due to our unique upbringing, genes and the impact of our immediate environment. To some extent, those with a biological and experiential history that differs from ours might initially find it challenging to understand or deduce our intent from our language.

Stemming & lemmatisation enable search engines to gain a better understanding of our queries and serve the most relevant web pages.

Our environment and biological makeup play a critical role in how we communicate as humans, but this also breeds a certain level of complexity. It is argued that human language is considerably different from that of other animals due to two factors. Firstly, human language is believed to contain tens of thousands of learned symbols (words) compared to that of other animals. Secondly, human language has a complex compositional syntax, with parts of speech such as nouns, verbs, adverbs and much more. The understanding derived from our sentences is a product of the individual meanings of the constituent parts or words. Language is quite complex, and we are seeing a proliferation of new words, slang and acronyms in this internet age. A very good example is the word ‘bad,’ which could mean ‘good’ or ‘bad’ depending on the age group, location and setting of the user. This is just one example of how complex words can be, and a misunderstood word could change the entire sentiment and meaning of a sentence.

This has led to the emergence of the field of natural language processing. Search engines utilise natural language processing (NLP) to train their machines to understand human language and serve the most relevant pages to meet the needs of a given user. Natural language processing is defined as the interaction between computers and human natural language, or the training of computers to understand human language. Stemming and lemmatisation are two key elements of NLP, and search engines utilise both to gain a better understanding of our search queries and serve the appropriate content or web pages.

 

Meaning of Stemming and Lemmatisation

What is Stemming?

Stemming is commonly used in the field of information retrieval and refers to the process of truncating words to their stem or root form. The original word is not expected to be semantically identical to its base, stem or root form. Stemming algorithms used by search engines handle stemmed words as synonyms and as expansions of the original root version. Here is a simple example of stemming: for the word ‘going’, the base form is ‘go.’ In this case, the words are related and likely to have a similar contextual meaning.

What is Lemmatisation?

In linguistics, lemmatisation refers to the process of grouping the different forms of a word together so they can be considered as a single item. In computational linguistics, lemmatisation is a multi-layered process that uses the part of speech of a word in a sentence to produce an accurate and contextual base word. For example, the verb ‘to dive’ could show up as ‘dive’, ‘dived’ or ‘diving.’ In lemmatisation, the parts of speech and context of words determine their respective base forms or lemmas.

Examples of Stemming and Lemmatisation with code implementation:

I’ll be using a simple Python script with the NLTK (Natural Language Toolkit) library to illustrate the difference between stemming and lemmatisation.
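Below is a minimal sketch of the kind of snippet used, assuming NLTK’s PorterStemmer and WordNetLemmatizer (the same comparison shown in the screenshots that follow):

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download('wordnet')  # WordNet data used by the lemmatizer (first run only)

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ['paid', 'is']:
    # Stemming looks at the surface form alone, with no part-of-speech context
    print(f"stem of '{word}':  {stemmer.stem(word)}")
    # Lemmatisation uses the part of speech (pos='v' for verb) to find the lemma
    print(f"lemma of '{word}': {lemmatizer.lemmatize(word, pos='v')}")
```

Running this returns ‘paid’ and ‘is’ unchanged from the stemmer, but ‘pay’ and ‘be’ from the lemmatiser, which is exactly the behaviour discussed below.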

stemming and lemmatisation in SEO

In the above example, I am looking for the root forms of the words ‘paid’ and ‘is’. You could easily guess that ‘paid’ is the past tense of the word ‘pay’ and ‘is’ is the third-person singular of ‘be’. Via stemming, the root words could not be identified, as it returned the words ‘paid’ and ‘is’ in their original inputted form. This clearly shows that stemming operates on a single word without taking the part of speech or context into consideration. It is almost akin to a garbage-in-garbage-out syndrome.

stemming and lemmatisation in SEO

On the other hand, the above example clearly indicates that lemmatisation takes the part of speech and context into consideration and returns the appropriate base or root word. For the word ‘is’, the third person singular of ‘be’ is returned and the word ‘paid’ rightfully outputs the root word of ‘pay’.

By way of clarity, search engines utilise lemmatisation to gain a better understanding of a user’s query and serve the most relevant result. They look at the part of speech (example: verb, adverb, noun) of the words within each query and identify contents or web pages with the most relevant root words and serve that accordingly.

Stemming and lemmatisation in search engine results

It is now important to have a quick look at search results on Google and ascertain how the search giant uses stemming or lemmatisation to render the top results. I ran a quick search on how to know you’ve paid the right price for your holiday.

Stemming and lemmatisation in SEO

The goal was to check the top pages and ascertain whether the lemma of ‘paid’ (which is ‘pay’) was used more often in the content, or just the actual word ‘paid.’ Based on the top pages, it is quite clear that Google also utilises lemmatisation in understanding a user’s query and rendering results accordingly. Within the top three pages on Google, the lemma or lemmatised root word ‘pay’ was used more often than the actual typed word ‘paid.’ Below is a good example of the top pages featuring the lemmatised base ‘pay’ more often than the originally typed word ‘paid’.

stemming and lemmatisation in SEO

In conclusion, stemming and lemmatisation are two important elements in natural language processing and information retrieval. These enable search engines to gain a better understanding of our queries and serve the most relevant web pages where applicable.

 


How search engines use artificial neural networks to improve search results

Which do you think is better: a computer that thinks like a human brain, or a brain that acts like a computer? If you are a bit puzzled as to the right answer, it is a rhetorical question and I am not expecting a ‘yes’ or a ‘no.’ We live in an age of technological advancement, as search engines have been built with algorithms that are designed to act like the human brain. This is achieved via machine learning models such as artificial neural networks. As a point of clarity, this post is not about how computers have become smarter than humans or how they can replace us. It focuses on how search engines have been inspired by how the human brain functions and have developed mathematical models to generate the most relevant search results for users. Before getting into the top-level bits of how search engines adopt the principles of neural networks, it is important to briefly look at how the human brain works.

How does the human brain work?

The human brain comprises about 100 billion nerve cells known as neurons. These neurons have the ability to collect and transmit electro-chemical signals. Neurons send messages to each other, which helps the brain perform optimally. Without going into the basic parts of the neuron, it should be noted that when our body encounters external stimuli like sound and image (light), our input, inner and output neurons are triggered. Neurons facilitate human intelligence as they help us identify patterns and predict the best result or answer based on previous experiences or knowledge.

artificial neural networks

If you are asked to guess what the above image shows, I am quite sure your answer will be a guitar. Whilst it seems like a weird-looking guitar, our brain still helps us identify it as a guitar. How does this work? The moment we come in contact with this image, the surrounding light, in conjunction with the light from the image, strikes our retina and the information processing begins. Our input neurons receive the data (the image in this example), which is then transmitted via the hidden neurons to their output counterparts. With the help of the most relevant hidden neurons, the output neurons are able to predict that the image is a guitar based on shape, colour, movement, location and spatial organisation (points, lines, areas and volume). Based on the strings, colour, shape and how the object is structured, our brain tells us it’s a guitar and not a piano or saxophone. The implication of this is that neurons have made the human brain so intelligent that we can make correct guesses about people, places and objects even when they look different from our previous experience of them. If a friend of yours completely changes their hairstyle or make-up, you are still likely to recognise them. This is based on how your inner and output neurons interpret shapes, colour, movement and a few other elements. It is quite similar to how search engines work, as they constantly encounter new or modified queries. Just as with the guitar above, despite the strange design, the neurons in our brain have collated elements of the look and structure of guitars and can correctly identify one even when it does not perfectly match any we’ve seen in the past.

How search engines utilise neural networks to improve the search user experience

Most search engines have some form of artificial neural network model, which is important for learning from users’ clicks in relation to search terms and predicting the best results for future searchers. It is believed that the number of queries fed into this model is instrumental in determining its effectiveness in producing the best search results or websites for users. As users click, the algorithm studies their preferences and reorders the search results page accordingly to ensure the next user finds the most relevant website. Let’s say you ran a search for ‘basketballs London’ and 10 websites are organically generated on the search engine results page. The top two organic results are websites that sell basketball products whilst the third website focuses on basketball clubs. Assuming most users with a similar search all visit the basketball club website and ignore product sites such as Sports Direct, the artificial neural network ranking algorithms of Google will promote the ‘Getactivelondon.com’ website above those of ‘Argos’ and ‘Sports Direct’.

search engines - neural networks

The diagram below expresses a simple artificial neural network for search engines. The first column of boxes contains the input neurons, which collect the user’s query. In this example, the user has run a search for ‘world river bank.’ This query is then transmitted to the hidden layer, and the most relevant node of this layer is labelled ‘hidden1.’ The hidden layer is also viewed as the query layer, and it determines the most relevant output by using a list of weightings for the different URLs, largely determined by previous users’ click behaviour.

artificial neural networks improving search experience

Simply put, most users who had ‘world’ and ‘bank’ in their queries found the ‘World Bank’ URLs most relevant; hence the said results are generated for the ‘world river bank’ query.
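As a toy illustration of that idea (not Google’s actual model), here is a sketch with a single hidden node and hand-picked weights standing in for the values a real network would learn from click behaviour; the URLs and numbers are made up:

```python
import math

# Toy illustration only: hand-picked weights standing in for values a real
# network would learn from users' click behaviour.
word_to_hidden = {            # input layer -> hidden "query" node
    'world': 0.9, 'river': 0.2, 'bank': 0.9,
}
hidden_to_url = {             # hidden node -> candidate result URLs
    'worldbank.org': 0.8,
    'rivers-of-the-world.example': 0.3,
}

def score_results(query):
    # Activation of the hidden node depends on which query words are present
    activation = math.tanh(sum(word_to_hidden.get(w, 0.0) for w in query.split()))
    # Each URL's score is the hidden activation weighted by learned click strength
    return {url: activation * w for url, w in hidden_to_url.items()}

print(score_results('world river bank'))
# The World Bank URL scores highest because past clickers who searched
# 'world' and 'bank' favoured it, mirroring the example above.
```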

Neurons are a powerful transmitter of signals and largely account for human intelligence. This has triggered the need for search engines to develop algorithms that are similar to how the human neurons function. As such, are search engines striving to be as smart as humans or just aiming to provide the most relevant results to users? I’ll sway more to the latter than the former.

 

Image Credit: True Fire and Programming Collective Intelligence

 


How JavaScript event handlers can help you understand web user behaviours

Do you ever get curious as to how users interact with your website or mobile app? I get quite fascinated trying to understand and analyse user behaviour on web properties. Drilling down to discover user behaviour on your website has a lot to do with a handful of JavaScript event handlers. This blog will not look into the JavaScript code implementation for these event handlers but will look at their theoretical significance.

Let’s paint a scenario: you’ve just built a brand new e-commerce site selling skateboards and you’d like to see how you can improve the site’s performance. Performance in this instance comprises metrics such as traffic, average time on site, pages per session and, ultimately, transactional indicators such as average order value, number of transactions, quantity and overall revenue. Achieving some or all of these metrics is of utmost importance to most marketers and business owners. How do you evaluate which colours or sections of your web page attract the most attention from customers? How do you get granular to understand customer behaviour on your website? This is where heat maps, which are powered by these JavaScript event handlers, come in.

Heat maps:

Heat maps are software applications that enable site owners and marketers to track clicks, scrolls, mouse motion and attention on a website or mobile app. These applications help to eliminate guesswork from the website redesign and optimisation process. Some of the top heat map tools out there include Hotjar, Crazy Egg, Mouseflow and Zarget. We’ll now have a quick look at a few JavaScript event handlers that make a granular analysis of web user behaviour possible.

Context of JavaScript event handlers:  

Before writing about the different types of event handlers, it is imperative that a bit of context is provided to how browsers generally register or capture web user behaviour.

Events and DOM nodes:

Without sounding too technical, the DOM (Document Object Model) is a concept that explains how browsers retrieve and display the contents of a website. This concept helps browsers understand the different parts of a website’s page, e.g. title, header and paragraphs. Event handlers can be set to track user behaviour in the different sections of the web page. A good example would be trying to ascertain whether a user clicks on a link in the first or second paragraph of your web page.

javascript event handlers-1

Event objects:

In the previous section, I briefly touched on the point that a web page is divided into different sections and you can analyse what section of a web page attracts the highest user engagement. A common question could be: are users clicking on a link or a button in the second paragraph of a page as against the first? This will help you optimise your page to enhance the engagement and overall experience of your users. On the other hand, event objects provide additional information about the actual user action. It is not just about clicking on a link or a button; it drills down to detect which mouse button (left, middle or right mouse button) was pressed.

Let’s now look at the different types of event handlers.

Types of JavaScript event handlers

Key events: There are a variety of keys on a keyboard, and understanding how users interact with and run an internal search on your website will help you enhance the user experience. The browser triggers ‘Keydown’ when a key on a keyboard is pressed down, and ‘Keyup’ is activated when the key is released. ‘Keypress’ is a third key event; it fires after ‘keydown’ (before the key is released) and only applies to keys that produce character input. In essence, ‘Keypress’ is triggered when a user types characters such as letters, numbers and special characters, as opposed to keys such as Backspace, Tab or the arrow keys.

Mouse clicks: Similar to keys, a mouse button press also triggers events in the browser. When you click your mouse, two common events take place: ‘Mousedown’ and ‘Mouseup.’ They are both similar to the ‘keydown’ and ‘keyup’ events and help you understand which links or buttons are of the most interest to users via their mouse click behaviour.

javascript event handlers 2

I used the Mouseflow heat map software to analyse user click behaviour on one of my blogs on PPC. I discovered a user or visitor clicked on the link that leads to Google’s announcement of the removal of ads from the side panel.

Mouse motion: Whenever you move your mouse on a web page, a ‘mousemove’ event is fired. A common example takes place when you hover over a link or button without actually clicking. The image in the previous point highlighted a user clicking a link leading to Google’s announcement on the removal of ads from the side panel. We had one click on the link and two hovers (mouse moves). This gives a hover-click rate of 50%. That is, out of the two people who looked interested in finding out more about the article, one clicked and the other continued reading my blog. This is a very important way to ascertain how compelling a website’s call-to-action buttons are, as more mouse moves (hovers) and fewer mouse clicks indicate a need to change the colour, font size or text of a call-to-action button or link.
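For clarity, the calculation behind that figure is simply clicks divided by hovers. Here is a tiny sketch using hypothetical counts exported from a heat map tool:

```python
# Toy calculation of the hover-click rate described above, using hypothetical
# counts of the kind a heat map tool such as Mouseflow can export.
def hover_click_rate(clicks, hovers):
    """Share of users who hovered over an element and went on to click it."""
    return clicks / hovers if hovers else 0.0

# Two visitors hovered over the link, one clicked: a 50% hover-click rate
print(f"{hover_click_rate(clicks=1, hovers=2):.0%}")
```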

mouse motion- event handlers

Scroll: The scroll event is fired or triggered whenever a user scrolls your web page. This is quite important for understanding whether a section of your web page is of little or great interest to site visitors. If most users do not scroll to the bottom or lower section of your site, you might consider placing your most important content in the top half of the given page. My PPC blog below reveals the user scroll analysis. The blue line is the fold (the upper section of a page on a mobile device, visible without scrolling). It indicates a certain number of users scrolled below the fold but not to the end of the page. This is quite useful in site and content optimisation.

Focus event: This is the last event and is triggered when an element or a section of a web page gains more user focus. It is sometimes viewed as an attention element. This helps you to determine if what you considered the most important section of your site is gaining the most attention or focus from users.

event handlers- user behaviour

Are you ready to get rid of the guesswork from your website and conversion rate optimisation efforts? Then the abovementioned JavaScript event handlers applied by popular CRO and heat map tools will ensure you are effectively engaging users at every point of the customer journey.

DOM Image credit: Marijn Haverbeke


Early findings on the new feature on Google’s mobile search result page.

About a week ago, I noticed a new feature on Google’s mobile search results page. Interestingly, Google now adds the ‘People also search for’ option whenever you revert to the search results page on mobile devices. Whilst pondering this recent change, I came across this tweet from one of the members of the SEO community.

The interesting point from this tweet was that Steve Hammer was suggesting that this new feature could be a result of pogo-sticking. I wrote a blog on pogo-sticking a while ago.

By way of clarity, pogo-sticking happens when a user clicks on a link to the search results, visits a given website and suddenly reverts to the search results page to click on a different link or result. It is believed that search engines use this to evaluate the relevance of a website for a given query. The final part of this blog will ascertain if this feature is completely aimed at measuring pogo-sticking or a modified version of pogo-sticking.

Early research findings on the new Google mobile search results page:

  • It appears after “one-back”: The tweet from Steve Hammer initially suggested that the ‘People also search for’ feature only appears after a user has visited two sites and reverted back to the results page. Based on my own research, it appears after “one-back.” This means that once you visit a site on a mobile device and return to the search results page, you will automatically see this feature.
  • It appears over the meta description: When a user reverts to the mobile search results page, the recommendation appears right over the meta description of a given site. It does not necessarily appear over the meta description of the page visited earlier.
  • Varying numbers of search terms recommended: The number of search terms suggested by Google varies; at times 3 were recommended and on other occasions 8 were put forward.
  • It does not appear on searches made before the new feature, regardless of overlap: I ran a search about two weeks ago, before the introduction of the recommender. The website I visited was left open in my browser for two weeks. I recently left the website and went back to the mobile search engine results page, and the new tool did not appear.
  • It may not be a way of detecting pogo-sticking: Pogo-sticking is believed to occur when a user visits a site and quickly returns to the search engine results page. I ran a couple of experiments where I visited a site and remained on it for 1 minute, 3 minutes, 7 minutes and over 12 hours respectively. On all occasions, the recommender surfaced when I reverted to the mobile search engine results page. As such, one could say pogo-sticking is not the sole reason the feature appears on mobile. Perhaps a modified form of pogo-sticking could suffice as an explanation.

Above all, this is a very interesting feature introduced by Google with user satisfaction at its heart. It is quite important that site owners continually strive to produce useful content that will prevent users returning to the mobile search engine results page. The world of search engine optimisation is synonymous with change, and this is a welcome development in the drive towards ‘search user optimisation’.


Penguin 4.0 algorithm update: key takeaways and optimisation tips

 

The weekend is always a great time to catch up on some exciting Premier League games and boxing unification fights. My weekend is golden, a memorable period of the week when I expect more drama from Arsenal’s Emirates Stadium than from the world of search engine optimisation.

Surprisingly, or somewhat unsurprisingly given Google’s nature, this past weekend Gary Illyes from the Google Search Ranking Team published a post about the new Penguin 4.0 algorithm update. For almost 707 days, search marketers have been pondering and chatting about the next Penguin update. There have been many speculations and false alarms regarding a Penguin update in the past, but this time around it was an official statement from the search giant that captured our attention. By way of clarity, this post is not about the large flightless seabird mostly found in countries like Chile and Argentina; it is about a Google algorithm named after the famous bird.

What is the Penguin algorithm?

The Penguin algorithm was launched in April 2012 and aims to demote the rankings of websites that have gained links via spammy, inorganic and unethical means. About 3.1% of search queries in English are believed to be affected by the Penguin algorithm. Links pointing to a website are an important way for Google to evaluate the importance of a given site and rank it accordingly. The importance of links in search rankings prompted some digital marketing professionals to employ deceitful ways of gaining links to websites, thus negatively impacting the search user experience.

Key points from Penguin 4.0 algorithm update

  • It is now in real-time: Prior to this update, websites affected by the Penguin algorithm had to wait for the next refresh to have their websites re-evaluated. For example, a website that had taken the necessary steps to remove or discredit spammy links would have had to wait over 707 days (the timeframe between the last and current update) to have the website reassessed and rankings reinstated. The good news is that the reassessment is now in real-time, which means Google is able to restore rankings after the next crawl. As Google crawls websites on almost a daily basis, the chances are that rankings can get back to normal in a few days or weeks as against 700+ days.
  • It is granular and not site-wide: In the past, if a single web page had bad or unnatural links, Penguin could negatively impact the rankings of the entire site. With this update, Google only devalues the rankings of the affected page. In the words of the Google Search Ranking Team: “Penguin now devalues spam by adjusting ranking based on spam signals, rather than affecting the ranking of the whole site.”
  • It is still part of the 200 ranking factors: Penguin is still part of the roughly 200 factors used by Google to determine how it ranks a website. Last November, Google released the search quality rater’s guidelines used to assess the relevance of sites.
  • Google will no longer confirm future Penguin updates: Going forward, Google will no longer confirm future Penguin updates, which means search marketers will have to keep a close watch on their link profiles and search rankings to spot any dip or uplift.
  • Penguin 4.0 will devalue and not demote websites: Days after publishing this blog, Gary Illyes from the Google Webspam Team shared this about how Penguin 4.0 is slightly different from previous webspam algorithms: “Traditionally, webspam algorithms demoted whole sites. With this one, we managed to devalue spam instead of demoting AND it’s also more granular AND it’s real-time. Once the rollout is complete, I strongly believe many people will be happier, and that makes me happy.” In essence, what this means is that Penguin 4.0 will not demote the ranking of a site as a result of unintentional spammy links. Penguin 4.0 is able to devalue these links, and submitting a disavow file may not be necessary. On the other hand, Gary Illyes advises webmasters that completing a disavow file could still help Google discredit spammy links. Overall, I personally think it might be necessary to complete a disavow file on extreme occasions when you feel the presence of spammy links could be having a negative effect on your site’s performance. Below are steps you can employ to alert Google about these spammy links.

How to make the best from the Penguin 4.0 algorithm update 

  • Link profile/health analysis: It is important to use tools such as Moz’s Open Site Explorer, Majestic and other relevant tools to assess your link profile and detect any suspicious links.
  • Google Search Console link analysis tool: You can also research the sites that link to your website via Google Search Console. It also affords you the opportunity to see the number of links that originate from each site and the recipient pages. It is a good way to spot any suspicious links and sites that could harm the ranking of your core pages.
  • Disavowing suspicious links: Google has a simple disavow tool that helps you submit links that you feel are spammy. The standard procedure is to contact the site owners to remove these links. If this approach fails, you can submit a disavow file for Google to discredit these links and reinstate affected keyword and page rankings respectively.
  • Server log analysis: Search Console allows you to examine Google’s crawl rate and the number of pages crawled, but not the actual pages. You can use a variety of tools such as Splunk, Sumo Logic, Loggly or the Screaming Frog Log File Analyser to ascertain the actual pages crawled. I use the Screaming Frog Log File Analyser and it is free for smaller sites with about 100 daily crawl events or activities. This process will help you determine whether Google has re-crawled a page that had a spammy link. If, after a week or two, that given page has not been re-crawled by Googlebot, you can use the ‘Fetch and Render’ option in Google Search Console to submit the page to be re-crawled, and any Penguin algorithm de-ranking effects can be re-evaluated. A minimal sketch of this kind of log check follows below.
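By way of illustration only (the log path and page URL are hypothetical, and this simplified check stands in for what a log analyser does), a quick way to look for Googlebot requests to a specific page might look like this:

```python
import re

LOG_FILE = '/var/log/nginx/access.log'   # hypothetical log path
PAGE = '/blog/penguin-4-0-update/'       # hypothetical page that carried the spammy link

googlebot_hits = []
with open(LOG_FILE, encoding='utf-8', errors='ignore') as log:
    for line in log:
        # In a combined-format log the request path sits inside the quoted
        # request and the user agent appears later on the same line
        if 'Googlebot' in line and f'GET {PAGE}' in line:
            timestamp = re.search(r'\[(.*?)\]', line)
            googlebot_hits.append(timestamp.group(1) if timestamp else 'unknown time')

if googlebot_hits:
    print(f'Googlebot last crawled {PAGE} at {googlebot_hits[-1]}')
else:
    print(f'No Googlebot crawl of {PAGE} found; consider Fetch and Render in Search Console')
```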

The Penguin 4.0 algorithm update is a welcome development in the life of a search marketer. We don’t have to wait for a century for the reassessment of websites. Since Google will no longer update us on future Penguin algorithm refreshes, we will have to keep a closer eye on our websites’ link profiles and rankings.

     


Is voice search the next SEO destination? Key takeaway from Brighton SEO

On Friday, 2nd of September, digital marketing professionals and enthusiasts came together for the Brighton SEO conference. It was a time to share top tips and insights and brainstorm on the ever-changing world of search engine optimisation. There were different sessions, but I opted for the analytics, onsite, voice search and technical streams. Personally, I found the presentation from Purna Virji on voice search the most insightful. It is worth noting that the other sessions had speakers who presented excellent findings on topical digital marketing concepts.

Are you prepared for voice search?

Purna Virji’s presentation touched on the growing importance of voice search as she revealed that 33% of searches on Bing are via Cortana, Microsoft’s voice search application. According to Purna, voice search has grown in popularity due to technological advancements that have led to an error rate of only 8%. With progress in machine learning, voice search applications such as Google Now, Siri and Cortana are able to learn accents and improve the user’s search experience. A lower error rate has led to an increased confidence level in these voice applications among users and led ComScore to predict that by 2020, about 50% of searches will be made via voice applications or personal digital assistants (PDAs). During the presentation, Purna revealed the top six queries from voice search:

  • Who is Bill Gates?
  • Do my homework
  • When will my package arrive?
  • Where do I live?
  • Where is my wife?
  • Who is my wife?

voice search the new SEO destination

In optimising for voice search, Purna proposed five essential steps:

  • Rethink keywords: Voice search is more question-based; optimising for longer keywords and using Schema mark-up will help search engines gain a better understanding of a site’s content and present it in the most relevant manner.
  • Rethinking local mobile: Purna revealed that voice searches on mobile have about three times the local intent of typed-text searches.
  • Rethinking intent: As SEO professionals, it could be daunting to understand the real intent behind the search terms of users. Voice search appears to have longer text and expresses the intent of users better than a standard text search.
  • A chance to rethink branding: Devising a brand name that is easy to pronounce will be important in enhancing the voice search experience.
  • Rethinking creative: Companies should consider aligning their creative, from website content to ad copy, with voice searches.

Is voice search the next SEO destination?

There is a drive towards conversational search as search engines make giant strides to personalise results for users. Typing challenges, quick answers and being occupied (e.g. while driving or cooking) are three factors compelling users to search via voice applications. Search engines such as Google, Bing and Baidu are investing heavily in algorithms that understand not just the search phrases but also the context of these searches. As such, elements such as the user’s search history, location and global search trends have an impact on which web pages are served to users. Voice search will continue to grow, but the key factor will be personalisation.

Reasons why voice search will be on the rise:

  • Lower error rate: Search engines will continue to improve their understanding of accents and user context (location, trends and habits) and this will encourage a higher use of voice search.
  • The need for multi-tasking: We are in a very busy world where multi-tasking is on the rise. Typing a query inhibits this generation from being able to multi-task. Voice search is the answer to a world with a love for juggling multiple tasks.
  • Less trial and error: Typing a query into a search engine may require a bit of trial and error and tweaking to get the desired web results. Voice search enables users to be more descriptive and clearly express their intent, which helps search engines serve precise results the first time of asking.
  • The need for social interaction: Humans are believed to be frailer than other animals. This triggers the need for social interaction and attachment. Imagine being in a new city and asking a stranger for directions to the closest coffee shop. The need for interaction is evident and PDAs (personal digital assistants or voice applications) will help fill that social need and provide relevant answers in our time of need.

Voice search is here to stay and is expected to grow in use within the next couple of years. As social beings, our desire for conversation and interaction will lead us to resort to voice applications such as Google Now, Alexa, Cortana and Siri.  Are you prepared for the voice search revolution?


Keyword Research: Recent developments and groupings in Google Keyword Planner

Search engines such as Google crawl, parse and index websites along the lines of keywords and the respective URLs. Despite the rise of deep learning algorithms and certain experts claiming the death of keywords, keywords are very much relevant.

SEO keyword research is an important part of the content creation process. How would you feel after spending hours writing a blog or web copy that generates no site visits? I bet this would be frustrating and lead you to question the relevance of content marketing. In-depth keyword research is usually carried out before content is created to ascertain search volumes and competitiveness.

Very recently, Google made certain changes to its popular Keyword Planner. Google now displays keyword search volumes in ranges for low-spending AdWords accounts, as in this example adapted from Search Engine Land.

google keyword planner developments

The keyword forecasting tool is not affected by these changes at the time of writing this blog. Here are some interesting insights about how Google is grouping similar words in its Keyword Planner tool.

Google Keyword planner groupings and insights

  • Keywords are grouped by hyphenated and non-hyphenated variants: I ran an experiment a few minutes ago to ascertain whether Google generates the same search volume for a hyphenated word and its non-hyphenated counterpart. The outcome is that words like ‘bake-off’ and ‘bake off’ have the same search volume of 33,100.

SEO keyword research

SEO keyword research

  • Keywords are grouped by stemming and root words: Stemming, in this case, refers to words that originate from a relevant root word. I ran a search on the Keyword Planner for the root word ‘dive’ and the search volume was 135,000; a search for the stemmed word ‘diving’ returned the same search volume.

SEO keyword research

As such, getting worked up over optimising a site for the root or stemmed keyword is a bit unnecessary, although paid keyword tools like Moz’s Keyword Explorer make attempts to split the search volume between root words and their stems. Using the ‘bake-off’ and ‘bake off’ example from the Moz Keyword Explorer, it clearly indicates that there is a search volume for ‘bake off’ but none for ‘bake-off’.


SEO keyword research

  • Keywords are grouped by abbreviations and their full form: Keywords are also grouped based on their abbreviations and full form and a good example of this is UNO and United Nations Organisation. I ran a search for both variants and the search volume was the same. Hence, it is safe to say Google combines the searches for both the abbreviated and non-abbreviated formats.

seo keyword research

SEO keyword research

  • Keywords are rounded to the closest bucket: The search volumes in Keyword Planner are rounded to the closest bucket. I ran a search on David Haye and it returned 165,000 monthly searches, which is a rounded figure. The next bucket above 165,000 is 201,000, and the actual search volume for ‘David Haye’ is much closer to 165,000 than 201,000, hence the present search result. The team at Moz analysed a massive dataset and found that Google has around 85 buckets, which formed the premise for the above analysis. A toy illustration of these groupings and buckets follows the screenshot below.

SEO Keyword research
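As a toy illustration only (not Google’s actual method, using made-up volumes and a made-up subset of buckets), grouping variants under a canonical key and snapping the combined volume to the nearest bucket might look like this:

```python
# Toy illustration only: grouping keyword variants under one canonical key and
# reporting a shared, bucketed volume, mirroring the behaviour described above.
HYPOTHETICAL_BUCKETS = [22200, 27100, 33100, 40500, 49500]   # made-up subset

def canonical(keyword):
    # Hyphenated/non-hyphenated variants and spacing collapse to one key
    return keyword.lower().replace('-', ' ').strip()

def nearest_bucket(volume):
    # Snap a raw volume to the closest bucket, as Moz's analysis suggests
    return min(HYPOTHETICAL_BUCKETS, key=lambda b: abs(b - volume))

raw_volumes = {'bake-off': 14000, 'bake off': 18600}          # made-up numbers

grouped = {}
for kw, vol in raw_volumes.items():
    grouped[canonical(kw)] = grouped.get(canonical(kw), 0) + vol

for kw, vol in grouped.items():
    print(kw, '->', nearest_bucket(vol))   # both variants report 33,100
```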

Google Keyword planner is one of the most robust keyword tools in the world due to the pool of data available. The above groupings give you a bit of an idea on how keywords are grouped and search volume aggregated. Keywords are an integral part of your SEO strategy and understanding the process of finding the gem words is not just important but golden.
