Friday, March 29, 2024

6 new Reddit Ads Manager features to simplify campaign creation and tracking

Reddit Ads Manager has been updated to make creating and tracking campaigns simpler and more efficient.

The platform now offers six new automated features designed to provide advertisers with easy access to creative best practices.

Why we care. These updates offer advertisers simpler ways to build campaigns and receive creative support, which could be particularly beneficial for small and medium-sized businesses (SMBs) with limited resources.

Smart headlines. This new tool automatically generates a variety of ad copy headlines tailored to Reddit’s unique audience. To utilize it, advertisers only need to input their website URL.

Creative asset cropper. This new tool enables advertisers to crop images to fit Reddit’s ad format specifications, making the upload process simpler.

Lowest cost automated bidding strategy. This new tool is designed to help advertisers maximize campaign results within their budgets by automatically setting bid amounts. Initially available to advertisers using the Traffic objective and the Simple Create flow, it’s now available for those with Conversions and Installs objectives.

Average daily budgets. This update automates advertisers’ daily budgets to align with their total spend for the week, reducing the need for manual adjustments to accommodate spending fluctuations.
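
The announcement doesn’t spell out Reddit’s exact pacing rules, but the arithmetic behind average daily budgets is simple to sketch. The numbers below are hypothetical; the point is that individual days can fluctuate while the weekly total stays at roughly seven times the daily figure:

```typescript
// Hypothetical sketch of average-daily-budget pacing: individual days may
// overspend or underspend, but the weekly total stays within 7x the daily budget.
const avgDailyBudget = 100; // dollars
const weeklyCap = avgDailyBudget * 7; // $700 for the week

const dailySpend = [80, 130, 95, 110, 70, 125, 90]; // fluctuating daily actuals
const weeklyTotal = dailySpend.reduce((sum, day) => sum + day, 0);

console.log(`Weekly spend $${weeklyTotal} vs. cap $${weeklyCap}`);
// Weekly spend $700 vs. cap $700 - daily swings even out across the week
```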

Updated campaign management. Reddit has enhanced its duplication and bulk edit functionalities to now allow advertisers to duplicate existing campaigns, ad groups, or ads, and use them as a blueprint for new ones. Additionally, the bulk edit tool features a new user interface, facilitating faster edits and optimizations for campaigns.

Reporting improvements to Reddit Dashboard. Reddit has updated its Dashboard UI to enhance clarity for advertisers tracking their campaign performance in real time. The updates offer improved usability, new graphs displaying performance metrics, and additional report filtering options for customized tracking.

What Reddit is saying. Reddit’s EVP of Business Marketing and Growth, Jim Squires, said in a statement:

  • “Every campaign on Reddit is unique, and we’re focused on serving our advertisers’ needs while providing best practices on creative, strategy, and campaign management,”
  • “Through automation, our aim is to make it easy to create the best and most relevant ad experiences for advertisers and users alike. Smart Headlines, for example, is an intuitive tool that all Reddit advertisers can leverage to effectively connect with their audience.”


from Search Engine Land https://ift.tt/OHSAkLv
via IFTTT

Thursday, March 28, 2024

Google did not drop Web Stories from image results

Google has issued a correction to its statement from early February that Web Stories no longer showed in Google Image search results; that is not true. What is true is that Google stopped showing the Web Stories icon in image search results.

Plus, Google Search Console had a reporting glitch with Web Stories in Google Image Search for the past two months.

Correction. Previously, Google wrote, “Web Stories don’t appear in Google Images anymore.”

Google posted a correction today saying, “Web Stories continue to appear in Google Images, just as other web content may appear, but Web Stories no longer appear with the Web Stories icon in Google Images.”

Search Console bug. Google Search Console had a bug during this same time period that did not show any clicks or impressions for Web Stories within Google Image Search. Google posted the bug details and wrote:

A logging error prevented Search Console from reporting on Web Stories in Google Images from August 28, 2023 until March 26, 2024. As a result, you may notice a decrease in clicks and impressions during this period for your Web Stories in the Search Console Performance report. This has been resolved and you may start to notice an increase in clicks and impressions, as the data returns to Search Console. This did not reflect a change in actual clicks or impressions, only an issue with data logging.

Why we care. For the past several weeks, we thought Google was no longer showing Web Stories within Google Image Search results, and Search Console data reported no impressions or clicks on Web Stories within Google Images. Neither was what it seemed: the statement was a miscommunication, the missing data was a logging bug, and both have now been corrected and clarified.



from Search Engine Land https://ift.tt/dIbRsyV
via IFTTT

Google still has not announced a launch date for SGE

Google has not announced a date for when it will fully launch the Google Search Generative Experience (SGE). We know Google has begun testing SGE in the wild with a small subset of US searchers; in fact, that test seemed to go live today.

But Google has not yet announced when it will roll out SGE and AI overviews to all searchers in the US and other regions.

Google I/O on May 14. There are some rumors circulating that Google may launch SGE, at least in the US and some other regions, on May 14th, during the Google I/O keynote address while Google’s CEO Sundar Pichai is on stage.

I mean, it makes sense to launch SGE at I/O – it is Google’s largest and most notable event of the year.

But again, Google has not told me on or off the record that SGE will ever fully go live, let alone that it would launch on a specific date. In fact, we had to squash some rumors about this before. Google did once have a deadline of December 2023, but it removed that deadline and did not launch SGE.

How might SGE launch. If Google does launch SGE in the near future, I think it will do so in a way that does not disrupt its search ads. I think Google would launch AI overviews on search results that generally do not contain any ads.

In fact, Google has said the majority of queries people do each day do not show search ads. And generally, AI answers are better for longer tail queries that may not have ads anyway.

Google has to be concerned with how the SGE interface will impact clicks on search ads, its number one source of revenue. So if Google launched SGE but didn’t show AI overviews on search results that carry ads, that would solve the revenue problem.

Google also will likely try not to show AI overviews for queries that are political or generally sensitive in nature.

Right now, there are plenty of examples of where the AI answers in SGE just look really bad. It reminds me of when Google launched featured snippets; there were plenty of embarrassing examples to point out then, too.

So if Google can figure out a way to limit AI overviews and not show them for those categories, Google can technically launch it.

Will users be able to opt out. Right now, SGE is a labs experiment that you can toggle on and off to opt out. But now that SGE is being tested in the wild, those who are in that test group have no way to opt out.

I spotted complaints today, as Google just started rolling out the SGE live tests, where users want to opt out but cannot.

Why we care. Many in our industry are on edge about SGE launching. Those who buy ads in the Google search results are worried about how it might impact clicks and conversions. Those who publish content are worried about how traffic from Google organic search might change with the launch. And content producers are worried that Google will take their content and serve the answer to the searcher, without their site benefiting from a single visit.

Google has said that they continue to prioritize approaches that send valuable traffic to publishers. Google has also said they are showing more links to sites with SGE in Search than before, creating new opportunities for content to be discovered. But are searchers clicking on those links…

So stay tuned – when we know more about when SGE will launch, and where, and how, we will keep you all posted.



from Search Engine Land https://ift.tt/vZEgGFd
via IFTTT

Pinterest shares algorithm insights as it shifts focus to non-engagement signals

Pinterest provided insights into how its algorithm works while warning of the risks of overreliance on user engagement. The platform emphasized that excessive reliance on engagement to rank content can result in a negative user experience, and proposed non-engagement signals as a solution.

To encourage other companies to follow suit and contribute to “building a more inspired Internet,” Pinterest published a new document titled ‘Field Guide to Non-Engagement Signals’.

Why we care. By having a better understanding of how Pinterest’s algorithm works, brands can identify what metrics they should prioritize in order to secure more visibility for their content.

What are non-engagement signals? Non-engagement signals are generated from two primary sources:

  1. In-app surveys: These give users the opportunity to provide direct feedback about the platform. For instance, Pinterest may conduct surveys within the app to gather user insights.
  2. Independent assessments of content quality: This tends to be generated from manual labeling.

In addition to balancing engagement signals in content ranking, non-engagement signals enable Pinterest to align with its values. For example, Pinterest’s commitment to inclusivity is supported by non-engagement signals. When users specify preferences regarding body type, hair pattern, or skin tone in their feed, Pinterest can prioritize relevant content accordingly.

Field guide. Pinterest teamed up with UC Berkeley and the Integrity Institute to create the Field Guide to Non-Engagement Signals in a bid to help platforms create better user experiences over time. Pinterest noted that the purpose of the guide is simply to help platforms make informed decisions when it comes to utilizing Non-Engagement Signals, as opposed to telling them what to do.

Key takeaways. The field guide, which is based on practical industry knowledge, offers several applications for product development, including:

  • How to tune for emotional well-being.
  • Using Generative AI to scale content quality signals.
  • Improving user retention.

What Pinterest is saying. Leif Sigerson, Pinterest Sr. Data Scientist, and Wendy Matheny, Pinterest Sr. Lead Public Policy Manager, said in a blog post:

  • “User engagement is a critical signal used by Pinterest and other online platforms to determine which content to show users. However, it is widely known that optimizing purely for user engagement can surface content that is low-quality (e.g., clickbait), or even harmful.”
  • “Our CEO, Bill Ready, explained that if we’re not careful, content ranking can surface the ‘car crash we can’t look away from.’ On the other hand, if you ask somebody after they saw the crash, ‘you want to see another one?’ the vast majority of people will say ‘Goodness no’.”
  • “Non-Engagement Signals are a critical component to ensure we don’t optimize for ‘the car crash we can’t look away from.'”

Deep dive. Read Pinterest’s blog in full or download a PDF copy of the field guide for more information.



from Search Engine Land https://ift.tt/frIomYO
via IFTTT

Google testing new tools for a more personalized shopping experience

Google is piloting new product discovery elements designed to give users a more personalized shopping experience, including:

  • Style recommendations.
  • Brand preferences.
  • Generative AI for product search.
  • Virtual try-on technology.

Why we care. These features offer businesses new opportunities to showcase their products and engage with high-value consumers who are likely to be more inclined to make purchases, thanks to a personalized shopping experience tailored to their specific preferences.

Style recommendations. Google is testing Style Recommendations, now accessible to signed-in U.S. shoppers via mobile browsers and the Google app, aiming to deliver more personalized results. For instance, when users search for specific apparel like shoes, they’ll come across the Style Recommendations section. Here, they can offer ratings with thumbs up or thumbs down, or simply swipe right or left to immediately access personalized results.

How it works. If shoppers aren’t satisfied with their personalized results or would prefer to continue browsing, Google offers the option to rate more items and immediately view another set of results. Additionally, the search engine will remember preferences for future searches. For example, when searching for men’s polo shirts again, you’ll receive personalized style recommendations based on your past preferences and interactions with products.

Managing style recommendations. If you’ve rated items incorrectly or would prefer to not see personalized shopping results, you can manage your preferences. To do this, just tap the three dots next to the “Get style recommendations” section and explore personalization options in the “About this result” panel.

Brand preferences. U.S. shoppers searching for apparel, shoes, or accessories on mobile browsers, desktop, or the Google app can now personalize their shopping experience by selecting preferred brands. Once chosen, options from these brands will appear instantly. To manage preferences, tap the three dots next to the “Popular from your favorite brands” section, and access personalization options in the “About this result” panel.

Google shopping Favorite brands

Image generation. Google is piloting an AI image generation tool for shopping, which is now available to all U.S. users who have opted into Search Generative Experience (SGE) within Search Labs. If you’re searching for a specific item, like a “colorful quilted spring jacket,” simply tap “Generate images” after your search to see photorealistic options matching your preferences. Once you find one you like, click on it to explore shoppable options conveniently.

Virtual try on. Additionally, Google is testing a virtual try-on tool in the U.S. on desktop, mobile and the Google app. When you see the “try-on” icon in shopping results, simply click on it, and you can see what the product looks like on a diverse set of real models ranging in size from XXS-4XL.

What Google is saying. Sean Scott, Google’s VP/GM Consumer Shopping, said:

  • “People shop on Google more than a billion times a day. And thanks to the Shopping Graph, they see billions of products in their results that are constantly being refreshed. In fact, every hour, more than 2 billion listings are updated with the latest information, including pricing, in-stock availability and shipping details.”
  • “So whether you’re looking for a specialty notepad from Japan, a monogrammed handbag from Paris or just a hammer from your local hardware store, the Shopping Graph can help you find it in just a few clicks. With that many options, we’re focused on making it easy to find exactly what you like.”
  • “No two shoppers are alike, which is why we’re designing the shopping experience on Google so it’s tailored to you.”

Deep dive. Read Google’s announcement in full for more information.



from Search Engine Land https://ift.tt/qEndlKy
via IFTTT

Search Ads 360: 3 best practices for advanced PPC marketers

While Google Ads has built-in bidding algorithms and optimization features, some PPC marketers and advertisers still rely on Google’s Search Ads 360 (SA360) for managing complex campaigns.

This article discusses three key best practices and integrations that can help maximize the value of using SA360:

  • Leveraging data integrations with Google Analytics and BigQuery.
  • Taking advantage of advanced bid optimization features like value-based bidding and custom variables.
  • Using templates and feed automation to streamline ad creation and management processes. 

What is Search Ads 360 (SA360)? 

Search Ads 360 is a Google-owned search management platform helping agencies and marketers manage large search marketing campaigns across multiple engines and media channels.

You can use Search Ads 360 to create, manage and report on search campaigns on:

  • Google Ads
  • Microsoft Ads
  • Yahoo! Japan Sponsored Products
  • Baidu

SA360 is for complex accounts and brands, especially retailers who have product feeds with ongoing inventory updates. 

Below are best practices to help you get the maximum value from SA360.

1. Leverage data integrations with Google Analytics and BigQuery

Bidirectional data syncing ensures everything stays consistent. Using Google Analytics conversions to feed bid models establishes a single source of truth across platforms. 

Feeding the data into BigQuery makes analysis more robust. A few use cases include:

Seasonal trend analysis

  • Brands can analyze historical SA360 data stored in BigQuery to identify seasonal trends in search queries and bookings. This insight can guide them in adjusting their ad spend and targeting strategies to capitalize on peak booking periods (a query sketch follows these examples).

Cross-channel performance

  • By integrating data from SA360 and other advertising platforms into BigQuery, brands can understand the role of search ads in the broader context of their multi-channel marketing efforts, optimizing the mix for better ROI.

Lead quality analysis

  • A lead generation company can use BigQuery to analyze SA360 data combined with lead quality data from their CRM. This enables them to identify which search keywords and campaigns generate high-quality leads and adjust their bidding strategies accordingly.
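
To make the seasonal trend use case concrete, here is a hedged sketch using the official @google-cloud/bigquery Node client. The project, dataset and table names are hypothetical, and real SA360-to-BigQuery export schemas will differ by setup:

```typescript
// Hedged sketch: monthly click totals from exported SA360 data.
// `my_project.sa360_export.campaign_stats` is a hypothetical table.
import { BigQuery } from "@google-cloud/bigquery";

const bigquery = new BigQuery();

async function seasonalClickTrends(): Promise<void> {
  const [rows] = await bigquery.query({
    query: `
      SELECT EXTRACT(MONTH FROM date) AS month, SUM(clicks) AS clicks
      FROM \`my_project.sa360_export.campaign_stats\`
      GROUP BY month
      ORDER BY month`,
  });
  for (const row of rows) {
    console.log(`Month ${row.month}: ${row.clicks} clicks`); // spot peak periods
  }
}

seasonalClickTrends();
```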

2. Take advantage of advanced bid optimization features

Value-based bidding helps advertisers focus not just on one conversion event but on a variety of events (email sign-up, conversion, finding a location, etc.).

Setting up the various values of each event helps the algorithm have a broader set of data and actions to optimize against. In addition, you can create weighting across these various conversion events in SA360 to help create the optimal mix of these events. 

There is also the ability to set up custom variables through Floodlight tags. These can capture products, location, loyalty program status and other data points that can be very important to businesses.

Isolated variances might not seem to mean much, but stacked on top of one another, they add up to meaningful differences.

Take, for example, the profit margin by product. Say you sell a $200 printer and a $200 digital camera. While revenue is $200 in both scenarios, the profit margin of the printer might be 30% and the digital camera only 10%. Therefore, the $200 for the printer is the preferred sale since it drove $60 of profit, while the digital camera only drove $20. 
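
A minimal sketch of that arithmetic, with the hypothetical products above, shows why profit rather than revenue should be reported as the conversion value:

```typescript
// Illustrative only: deriving profit-based conversion values from margins.
const products = [
  { name: "printer", revenue: 200, margin: 0.3 },
  { name: "digital camera", revenue: 200, margin: 0.1 },
];

for (const product of products) {
  // The profit, not the $200 revenue, is what the bid strategy should value.
  const profit = product.revenue * product.margin;
  console.log(`${product.name}: $${profit} conversion value`);
}
// printer: $60 conversion value
// digital camera: $20 conversion value
```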

Other items with meaningful business impact could include shipping costs that vary based on the product’s weight, or products that generate subscription revenue vs. one-time purchases. These are the types of sophisticated rules you can set in SA360 that can really make a difference.

This includes getting sophisticated help for brand-new campaigns, referred to as a “cold start.” Such campaigns must balance the algorithm’s learning period against overreaching for a high position and wasteful spend.

While it still takes up to two weeks to gather enough data, “cold start” mode does a much better job of getting campaigns off the ground.

Google specifically states that it overrides the bid strategy’s primary goal, such as CPA, ERS, ROAS, position and any monthly spend amount set for the bid strategy. However, it does not override minimum/maximum bids or campaign budgets.

Dig deeper: How each Google Ads bid strategy influences campaign success

3. Use templates and feed automation to streamline ad creation

Many businesses beyond retailers have a variety of campaigns, products and variations that could save a lot of time and effort by being automated and converted into templates.

The below example for an airline helps show how these variations can be used to create dynamic ads for the consumer. 

When setting these up, you can maximize their use by optimizing the feed. Remember, “garbage in, garbage out.” You must ensure the foundation is solid to fully utilize these automated features.

These templates can be found under the shared library in SA360. The below example shows how the feed should be set up. 

Feed template for airline
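
Since the screenshot doesn’t reproduce here, a small sketch with made-up feed fields illustrates the underlying idea: each feed row is substituted into one ad template to produce a dynamic variation, which is why feed quality matters so much.

```typescript
// Hypothetical airline feed rows driving a dynamic ad template.
interface RouteRow {
  origin: string;
  destination: string;
  fare: number; // lowest current fare, refreshed from the feed
}

const feed: RouteRow[] = [
  { origin: "New York", destination: "London", fare: 499 },
  { origin: "New York", destination: "Tokyo", fare: 899 },
];

// One template, many ads: clean feed data in, clean ad copy out.
const buildHeadline = (row: RouteRow) =>
  `${row.origin} to ${row.destination} from $${row.fare} - Book Today`;

feed.forEach((row) => console.log(buildHeadline(row)));
```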

Leveraging SA360 for advanced PPC campaigns

Search Ads 360 is a potent tool accessible only through a certified third-party seller. Skilled search marketers can leverage its advantages effectively.

However, like any tool, it requires expertise for maximum value.

Just as you wouldn’t buy a high-priced laptop solely for web browsing, you must understand your business deeply to utilize this tool effectively. Custom variables, templates and data integrations are crucial for success, particularly in competitive markets.



from Search Engine Land https://ift.tt/KQ75q96
via IFTTT

8 steps to maximize success during global site migrations

Migrating a website is a complex undertaking, but when it involves transitioning global sites across multiple markets, the challenges are exponentially greater.

This article provides a comprehensive guide to maximizing success with global site migrations while protecting organic search traffic. 

From the initial planning stages to post-launch monitoring, it covers key actions organizations should take to deeply integrate SEO best practices, preserve URL and content value and streamline the process across international markets. 

Site migrations vs. global site migrations

A migration is any significant domain, content or URL change that typically happens during a website rebuild, consolidation, domain change or move to a new content management system (CMS).

There is enough to go wrong with a single site migration, but it can be disastrous when implemented globally without careful consideration.

Before the migration 

Your actions during the initial discovery and planning phases can significantly impact the project. The more you can illuminate and integrate global search requirements into the process during this early phase, the less you will have to fix them later.

1. Consider the breadth and timing of deployment

Early on, you should size up how big of a deployment this will be. Will you deploy across all markets (common with a CMS “lift and shift” and simple branding), or will this be a major technology update and a complete architecture or content refresh? 

Many experienced managers will deploy the new platform in a smaller single market or region to work out the bugs using beta users or a small percentage of users on the website to stress and function test it. 

For a multimarket deployment, timing is very important. Ensure you review the holiday and other seasonal schedules in the impacted markets, as they may differ from the home market.

It is never wise to plan a launch close to year-end or during the summer, as many employees are on vacation or entire offices shut down for a week or even a month, negatively impacting resource availability.

2. Ensure SEO best practices are integrated

Expect eye rolls and visible signs of anxiety from teams when SEO integration is mentioned in migration meetings.

Despite their frustration, emphasize that integrating search-friendly factors during the design phase is much easier than retrofitting them later.

Instead of just sharing the latest SEO checklist or random articles with the team, provide specific tasks that need your expertise. This includes:

  • Template adjustments.
  • Significant content updates.
  • URL changes.

This way, your reviews and insights can be seamlessly integrated into the planning process.

We can’t assume developers will make the site fast and mobile-friendly. Even before mockups have been created, work to enforce code, performance and platform adaptability requirements and the requisite QA acceptance testing.

Not every market has super-fast internet or uses desktops, so this foundational performance must be required for the website. 

Remind them that many local markets do not have the resources for reviews and functional tests. Hence, the more that can be integrated natively into the deployment, the more it can cascade easily into all markets. 

Dig deeper: International SEO: How to avoid common translation and localization pitfalls

3. Identify content edits or new functionality

This is the best time to make sure your wish list for new or modified content for each market is integrated into the workflow.

Share the output of your market-specific keyword research or entity mapping exercises that identified new content or adaptations so the localization and content teams can use them during their editing process. 

During a “lift and shift,” when you move content to a new design, check if any content is too long or too short for the new layout. You might need to add more content or trim unnecessary parts accordingly.

If you need new functionality like hreflang or schema, you must make it known during the planning process so any necessary coding or template adjustments can be made. 

4. Ensure internationalization readiness 

Preparing the new site and templates for internationalization is vital, even though it’s not typically an SEO task. This ensures compatibility with localized content, covering issues like text fitting and handling special characters.

Failing to accommodate this critical set of actions will negatively impact user experience and SEO. The following are some of the key considerations to ensure internationalization:

  • Character encoding: Ensuring the entire infrastructure is Unicode (UTF-8) compliant is necessary to support the special characters found in many languages. 
  • Layout and design: Both elements must be flexible enough to adjust for the expansion, contraction and direction of localized text. I’ve seen cases where, right before launch, there wasn’t enough space for German text or support for right-to-left Arabic. The layout should avoid incorrect breaks between Japanese and Chinese characters.
  • Entity-escaping XML sitemaps: This is one of the most overlooked requirements, especially for those launching in markets with special characters. XML sitemap protocol does not support Chinese, Russian or even French and German special characters, so they must be encoded to replace them with an escape code (see the sketch after this list).
  • Date, time and number formats: When the primary focus is on the U.S. functionality, consideration may not be given to the different local conventions, including measurements and date formats. 
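
As a minimal sketch of the entity-escaping point above (the URL is hypothetical): non-ASCII characters are percent-encoded and the reserved XML characters are replaced with entities before a URL goes into the sitemap.

```typescript
// Escape a localized URL for use in an XML sitemap <loc> element.
function escapeXml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

const rawUrl = "https://example.de/suche?farbe=grün&größe=m"; // hypothetical URL
const sitemapEntry = `<url><loc>${escapeXml(encodeURI(rawUrl))}</loc></url>`;

console.log(sitemapEntry);
// <url><loc>https://example.de/suche?farbe=gr%C3%BCn&amp;gr%C3%B6%C3%9Fe=m</loc></url>
```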

Migration development process

With the plan in place, the real work starts with the development of technology and content. During this time, the global search team will take a few key actions.

5. Document URLs and content

A common mistake during migration is removing or combining webpages and valuable content, which can hurt rankings and traffic. This often happens when streamlining content and workflows.

The SEO team needs to create a plan to minimize this impact:

  • Catalog URLs: Start with collecting all URLs for every market site and a set of performance attributes. You should use multiple sources to compile the list and data – an extract from the CMS, sitemaps, Google Search Console, GA4 and your diagnostic crawler. 
  • URL decision status: As the content and tech teams work their magic, update your URL list with an outcome status. I typically use Keep/Update/Merge/Remove designations to indicate the action on the page. Any status other than keep will impact the performance of any content on the page.
  • Develop URL value scores: For each URL, generate a URL Value Score (UVS). This is a unique weighting for the URL and its content using multiple data points, including keyword rankings, backlinks, traffic, pageviews and sales. The UVS enables you to make data-driven decisions about the URLs and ensure high-value URLs and their content are redirected or actioned correctly post-migration. A minimal scoring sketch follows this list.
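
The article doesn’t prescribe a UVS formula, so the sketch below uses made-up weights purely to show the shape of the calculation; log scaling keeps one outsized metric from drowning out the rest.

```typescript
// Hypothetical URL Value Score: metrics and weights are illustrative only.
interface UrlMetrics {
  url: string;
  sessions: number; // organic traffic
  backlinks: number;
  rankingKeywords: number;
  revenue: number;
}

function urlValueScore(m: UrlMetrics): number {
  // log1p dampens large values; weights reflect what the business cares about.
  return (
    0.4 * Math.log1p(m.sessions) +
    0.2 * Math.log1p(m.backlinks) +
    0.1 * Math.log1p(m.rankingKeywords) +
    0.3 * Math.log1p(m.revenue)
  );
}

const urls: UrlMetrics[] = [
  { url: "/en/pricing", sessions: 12000, backlinks: 85, rankingKeywords: 140, revenue: 50000 },
  { url: "/en/blog/old-post", sessions: 40, backlinks: 2, rankingKeywords: 5, revenue: 0 },
];

// High scorers must be redirected correctly; low scorers are merge/remove candidates.
for (const u of [...urls].sort((a, b) => urlValueScore(b) - urlValueScore(a))) {
  console.log(u.url, urlValueScore(u).toFixed(2));
}
```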

6. Develop an SEO preservation roadmap

Creating a detailed project roadmap for each language or market version is crucial.

Integrate these plans into the main roadmap so everyone understands how optimization fits their workflow. Document each action, including time, for verification.

Dig deeper: 5 international SEO tips that don’t include hreflang

Prelaunch review

The prelaunch review happens in the days leading up to the actual launch. Some companies call all hands on deck to review the entire site, which circles back to the resource issues we mentioned earlier. 

In the local markets, there may only be time to spot check, requiring the teams to focus on areas with the greatest impact. 

7. Check parent-child relationships

Even simple “lift and shift” migrations can break the parent-child relationships between the main site and localized pages.

This breakage, or a delay in the localization process, may prevent pages from being available. It also often creates links to pages that don’t exist or have not been localized, which will cause duplication or indexing problems in those markets.

This is why a detailed URL list and UVS are essential to prevent the loss of critical child-only pages. 

8. Preserve and transfer authority and equity

This is a fancy way of saying: make sure all the redirects are implemented correctly.

Most post-launch triages by SEOs identify incorrect or missing redirects as the main cause of traffic drops.

If you have built equity to a page over the years, it must be passed on to the new page for that content. 

Post-launch monitoring 

You would follow all the standard steps for post-launch checks, but they need to be done on all language and market websites, adding to both the workload and complexity of the verification process. 

Despite exhaustive prelaunch checks, there is a high probability something changed during the launch, especially with localized versions of the website.

Often, templates, entire sections or language versions are changed in the hours before launch; these changes need to be identified and fixed.

Below are key elements to monitor:

  • Open robots.txt: I have encountered far too many websites that have launched with a correct robots.txt file on the main U.S. website while all the market sites remained blocked. Item one on the checklist is to ensure every version of the website allows access. It is often helpful to share the expected settings to eliminate any confusion (a quick check script follows this list).
  • Validate hreflang: Especially with CMS changes or regional deployments, ensure your hreflang implementation is correct as it accelerates URL detection and reindexing. Google may consider new URLs as duplicates until it processes your hreflang, making this step important for those using hreflang tags. Most SEO diagnostic tools can validate your hreflang implementation while it is doing other post-launch diagnostic crawls. 
  • Submit XML sitemaps: Once the site is open for public use, remove any old XML sitemaps and submit new ones, especially if you have made major changes and changed CMS or domains. If you have consolidated or changed domains, you should submit a change of address notification for each old domain.
  • Update country/language selectors: Often overlooked in the refresh is the country or language selector either in the footer or on a dedicated page. Ensure any new or consolidated websites or URLs are updated. 
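
For the robots.txt item, a quick hedged script can sweep every market site right after launch; the domains and the blanket-disallow heuristic below are illustrative only.

```typescript
// Check that no market site launched with a blocking robots.txt.
const marketSites = [
  "https://www.example.com",
  "https://www.example.de",
  "https://www.example.co.jp",
]; // hypothetical market domains

async function checkRobots(origin: string): Promise<void> {
  const response = await fetch(`${origin}/robots.txt`);
  const body = await response.text();
  // Naive heuristic: a bare "Disallow: /" rule blocks the whole site.
  const blocked = /^Disallow:\s*\/\s*$/im.test(body);
  console.log(`${origin}: ${blocked ? "BLOCKED - investigate" : "crawlable"}`);
}

// Top-level await works in modern Node (ESM) and Deno.
await Promise.all(marketSites.map(checkRobots));
```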

Set up your global site migration for success

Global migration is never easy, especially for a resource-challenged organization.

If you can do nothing else, deeply integrating SEO into the workflow early in the process allows it to flow into other markets.

Monitoring redirects and reindexing high-value pages for all markets allows you to detect problems early so they can be fixed.

Emphasize preserving and enhancing the elements that drive performance, and align on this with the development, content and localization teams.

Dig deeper: How you can deal with decentralization in international SEO



from Search Engine Land https://ift.tt/9iyu1Of
via IFTTT

Wednesday, March 27, 2024

Amazon ordered to publicly share details of ads it serves in the EU

Amazon must now share details about the ads it serves in the European Union through a public library.

The retail giant is being forced to provide more transparency about its ad operations under the EU’s Digital Services Act (DSA) after losing an appeal in the Court of Justice of the EU (CJEU) for a temporary suspension.

Why we care. The creation of an Amazon ads library will offer marketers valuable insights into how the retail giant showcases and profits from campaigns. This will empower them to optimize their ads more effectively for better performance on the platform.

What is the Digital Markets Act (DMA)? The DMA is a piece of legislation introduced in 2022 designed to ensure that large online platforms, called “gatekeepers”, behave in a fair way online to create a fair and open environment for online businesses. Only six gatekeepers have obligations under the DMA:

  • Alphabet (Google’s parent company).
  • Apple.
  • Meta.
  • Amazon.
  • Microsoft.
  • ByteDance.

All six companies, none of which are based in the EU, were required to ensure they fully complied with DMA obligations and submit compliance reports by March 7.

DMA violation penalties. The consequences of non-compliance with the DMA include:

  • Fines: Up to 10% of the company’s total worldwide annual turnover, or up to 20% in the event of repeated infringements.
  • Periodic penalty payments: Up to 5% of a company’s average daily turnover.
  • Remedies: These can include behavioral and structural remedies, such as the divestiture of (parts of) a business.

Amazon contests DSA requirements. Amazon challenged the DSA’s ads transparency requirement in September last year. As a result, the EU General Court temporarily suspended the ads library requirement while the challenge was resolved.

Decision reversal. This week, the CJEU overturned the decision to temporarily suspend Amazon’s requirement to comply with the ads transparency provision, ruling that Amazon must now publish an ads library. While the court recognized Amazon’s concerns about compliance, it emphasized the importance of upholding the intentions of EU lawmakers in passing the law. Delaying compliance could undermine the objectives of the DSA, potentially for several years.

What the CJEU is saying. The CJEU said in a statement:

  • “Suspension would lead to a delay, potentially for several years, in the full achievement of the objectives of the Regulation on a Single Market for Digital Services and therefore potentially allow an online environment threatening fundamental rights to persist or develop, whereas the EU legislature considered that very large platforms play an important role in that environment.”
  • “The interests defended by the EU legislature prevail, in the present case, over Amazon’s material interests, with the result that the balancing of interests weighs in favor of rejecting the request for suspension.”

What Amazon is saying. An Amazon spokesperson told TechCrunch:

  • “We are disappointed with this decision, and maintain that Amazon doesn’t fit the description of a ‘Very Large Online Platform’ (VLOP) under the DSA, and should not be designated as such.”
  • “Customer safety is a top priority for us at Amazon, and we continue to work closely with the EC with regard to our obligations under the DSA.”

Deep dive. Read the CJEU’s statement on its decision in full for more information.



from Search Engine Land https://ift.tt/hCWukXa
via IFTTT

Google changes definition of ‘top ads’

Google updated its definition of top ads in its Help Center to better reflect how ads can appear in Google Search.

A spokesperson told Search Engine Land that this is just a “definitional change” and that it would not affect how performance metrics are calculated.

What are top ads? Top ads are ads that generally appear above the top organic results, though they can also show below the top organic results for certain queries. As explained by Google:

  • “When people search on Google, text ads can appear at different positions relative to organic search results. Top ads are adjacent to the top organic search results. Top ads are generally above the top organic results, although top ads may show below the top organic search results on certain queries. Placement of top ads is dynamic and may change based on the user’s search.”

Why now. Google changed its definition of top ads after testing ads between search results back in October. Patrick Stox, Ahrefs product advisor, technical SEO and brand ambassador, noticed sponsored posts were appearing where the third and fifth organic positions normally run and shared a screenshot on X:

google top ads

Why we care. This update could indicate that Google is progressing with its idea to insert ads between organic search results, as seen in trials last year. This change could potentially result in more clicks for advertisers in areas of Google Search that have traditionally been reserved for organic content.

Deep dive. Read Google’s top ads guide for more information.



from Search Engine Land https://ift.tt/d3tEP6C
via IFTTT

Maximizing your B2B spend: Is account-based marketing worth it?

In B2B marketing, the traditional approach of casting a wide net is increasingly being challenged by a more targeted and personalized strategy: account-based marketing (ABM).

This method focuses on identifying and engaging with specific high-value accounts, treating each as its own distinct market.

But is ABM the right approach for your business?

This article explores the potential benefits and challenges of implementing an ABM strategy, along with key factors to consider when evaluating its suitability for your organization.

Understanding account-based marketing

At its core, ABM is a strategic approach that focuses on targeting high-value accounts with personalized and relevant marketing efforts.

Unlike traditional marketing methods that typically cast a wide net, ABM is laser-focused, treating individual accounts as unique markets in themselves.

The upside: Benefits of ABM

  • Improved targeting: ABM allows digital marketers to tailor strategies to the specific needs and pain points of target accounts, resulting in more precise targeting.
  • Enhanced personalization: Personalization is crucial in today’s digital landscape. ABM enables marketers to create customized content and messaging for each targeted account, fostering stronger connections.
  • Increased sales and revenue: By concentrating efforts on high-value accounts, ABM can lead to higher conversion rates and increased revenue. The personalized approach tends to resonate more with decision-makers, facilitating the sales process.
  • Better alignment of sales and marketing teams: ABM encourages collaboration between sales and marketing teams, ensuring a unified approach towards specific target accounts. This alignment can lead to improved communication and a more streamlined sales process.

The downside: Challenges of ABM

While the benefits of ABM are significant, it’s essential to also consider the challenges that come with implementing this type of strategy:

  • Resource intensiveness: ABM requires substantial time, effort and resources. Marketers need to evaluate whether their team can commit to the level of personalization and attention required for effective ABM.
  • Data accuracy and integration: Successful ABM relies on accurate and integrated data. Digital marketing professionals must ensure their data management systems are robust enough to support the intricacies of an ABM strategy.
  • Longer sales cycles: The personalized nature of ABM can sometimes lead to longer sales cycles, meaning marketers must weigh the potential benefits against the patience required for the strategy to yield results.

A contrarian approach to ABM

The LinkedIn B2B Institute’s “2030 B2B Trends” tackles contrarian ideas for the next decade. Below are key takeaways to think about from the 43-page report: 

  • Most third-party data is unreliable; it’s highly inaccurate and expected to worsen due to the changing privacy landscape.
  • Employees frequently change roles, resulting in an unstable buying committee. Moreover, we tend to buy from the brands we know.
  • While you may be aware that IT professionals will use your product, depending on the company structure, individuals with different backgrounds, experiences and decision-making power may be involved in the purchasing process.
  • Although hyper-targeting is more expensive, this contrarian approach recommends going broader. This doesn’t imply zero targeting; rather, it suggests focusing on category reach to avoid imaginary efficiencies.

LinkedIn 2030 B2B Trends report - ABM

Dig deeper: 6 areas to address before increasing B2B paid media investment

So, is ABM right for you?

The key to identifying whether you are ready for ABM comes down to this.

Unless you can seamlessly pass account information from the top of the funnel to the bottom and then automate the transfer of this information to the sales/business development team, you can’t execute ABM correctly, period.

Ultimately, you must have the capability to exclude or update accounts from all targeting based on their status with the sales team in order to provide the desired experience to the client.

The following notes are also important when considering ABM:

  • Identify high-value accounts: Assess whether your business model relies on a small number of high-value accounts rather than a larger volume of smaller accounts. ABM is most effective when targeting a select group of key accounts.
  • Evaluate resources: Consider the resources available for implementing ABM. Assess your team’s bandwidth, technology infrastructure and budget to determine if you can commit to the demands of an ABM strategy.
  • Analyze sales cycle and lifetime value (LTV): If your product or service involves a complex sales process with longer decision-making timelines, ABM might be a suitable strategy to nurture and engage key accounts over time, assuming you have an LTV that will support “waiting” for the right account.

Where to start?

The best place to start is LinkedIn. It is the go-to platform for B2B marketers.

LinkedIn easily integrates with your marketing automation, CDP or CRM platform, allowing seamless back-and-forth sharing of audiences. These audiences can include email addresses (both business and personal) and account names.

While you can manually upload audiences using a CSV file, doing so may hinder the effectiveness of your ABM efforts.

You can technically add email addresses to all major ad platforms (such as Google Ads, Microsoft Ads, X, Reddit, etc.), but none offer the match rates you can find on LinkedIn, where they typically exceed 70%.

If your company leverages tools like 6Sense, Terminus or DemandBase, you can also target specific decision-makers or influencers across diverse placements and platforms they use daily.

Dig deeper: 7 LinkedIn advertising pitfalls: Where your B2B ads setup might stumble

Deciding on ABM

While ABM offers a potent strategy for B2B marketers, its success ultimately depends on carefully considering resource availability, target account characteristics and your commitment to personalization.

Furthermore, the LinkedIn B2B Institute’s report sheds additional light on key considerations, including the challenges of third-party data and the need for a balanced approach, considering the dynamic nature of buying committees. 

With this information, you are set up to make an informed decision. Leveraging the benefits and challenges discussed, use strategic evaluation to determine whether ABM aligns with your digital marketing objectives – then reconsider the question, “Is ABM right for you?” 

Dig deeper: 2024 B2B trends: 6 key areas for marketing success



from Search Engine Land https://ift.tt/LEONj7J
via IFTTT

Tuesday, March 26, 2024

LinkedIn Ads launches dynamic UTMs for campaign tracking

LinkedIn is launching a new solution for monitoring campaign performance without third-party cookies.

Dynamic UTMs, which will be available to all users globally by the end of March, simplify adding custom tracking elements to campaign URLs, improving tracking accuracy.

Previously, marketers had to manually create UTM parameters for their campaigns, but with Dynamic UTMs, this process is automated.

What are UTMs? UTM parameters, short for Urchin Tracking Module parameters, are small pieces of text added to the end of your ad’s destination URL that can help you track information about the source of an ad click. For example, you can add tracking parameters to help measure how many clicks to your website were from a specific campaign or placement.
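
For example, here is roughly what appending UTM parameters to a destination URL looks like with the standard URL API; dynamic UTMs automate this step, and the parameter values below are hypothetical:

```typescript
// Manually building a tagged destination URL.
const destination = new URL("https://www.example.com/landing-page");

destination.searchParams.set("utm_source", "linkedin");
destination.searchParams.set("utm_medium", "paid_social");
destination.searchParams.set("utm_campaign", "spring_launch"); // hypothetical names

console.log(destination.toString());
// https://www.example.com/landing-page?utm_source=linkedin&utm_medium=paid_social&utm_campaign=spring_launch
```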

Why we care. UTMs are essential for marketers to grasp a campaign’s performance, especially in a privacy-focused landscape, as they don’t rely on third-party cookies or IP addresses for ad measurement. However, creating them manually can be time-consuming, inefficient and prone to errors. This solution looks to address these challenges.

Supported URL tracking parameters. You can add both static and dynamic URL parameters to your campaigns. When adding a URL parameter, you’ll need to provide both the parameter key and its value.

Deep dive. Read LinkedIn’s announcement in full for more information.



from Search Engine Land https://ift.tt/g4PhbzU
via IFTTT

Turn data into meaningful customer moments by Edna Chavira

In today’s experience economy, customers expect personalized and relevant experiences at every touchpoint. But maintaining that level of personalization at scale is becoming even more challenging with third-party cookies going away.

Join our expert panel to learn how Salesforce allows marketers to unify customer data into a single view – helping you drive true omni-channel personalization without relying on third-party tracking. You’ll see how AI and real-time data activation can help you:

  • Establish a comprehensive, single view of the customer by using Data Cloud
  • Deliver personalized experiences across sales, service, and marketing channels through first-party data
  • Use AI-driven insights to continually optimize and enhance customer experiences

Don’t miss this opportunity to learn from the experts and unlock the full potential of your customer data. Register now to save your spot!



from Search Engine Land https://ift.tt/x8VHJGa
via IFTTT

YouTube issues guidance on when to post content for maximum reach

YouTube issued guidance to creators and marketers on the best times to upload videos for maximum reach.

For news, trends, or content with a shorter lifespan, it’s best to post as soon as possible. However, if you’re sharing evergreen content, the platform suggests posting at whatever time suits you best.

Live videos. To maximize reach with live videos, YouTube advises going live when your analytics chart indicates that most of your audience will be available to watch.

Off-hour posting. YouTube creator liaison Rene Ritchie mentioned on X that if you post during “off hours,” like the early morning, the initial boost in views might be slower, so it’s important to adjust your expectations. However, he pointed out that over a few weeks or a month, the total views usually even out, regardless of the initial posting time.

Why we care. This guidance provides valuable insight into easily increasing reach by posting more strategically. However, if your content isn’t time-sensitive, YouTube confirmed that you don’t need to stress about researching optimal posting times, meaning one less thing to worry about in your content strategy.

What YouTube is saying. YouTube Product Lead Todd Beaupre said in a video shared on X:

  • “The way we designed the algorithm, we want to give audiences videos and other content that they’re going to be the most satisfied by. If the audience doesn’t really care whether you uploaded it at three in the morning or seven at night, then the algorithm shouldn’t care either.”
  • “What I encourage creators to do is when they have some of these questions, and they immediately think about the algorithm, replace the word algorithm in their question with the word ‘audience.’ That’s going to give you a better answer because we’re aiming to serve the audience.”
  • “The more you focus on understanding what your audience wants and finding the best fit with them, the algorithm is going to match that.”

Deep dive. Read our guide on how the YouTube algorithm works for more information.



from Search Engine Land https://ift.tt/rSRGZT5
via IFTTT

Mikhail Parakhin, head of Microsoft Bing Search and Microsoft Advertising, steps down

Mikhail Parakhin is stepping down from his role as the head of Bing Search and Microsoft Advertising and may be transitioning to a new role within Microsoft. “Mikhail Parakhin, head of the company’s Bing search engine and advertising businesses, will exit those roles and look for a new position, a week after the software giant named Mustafa Suleyman to oversee consumer artificial intelligence work and asked Parakhin to report to him,” Bloomberg reported.

Mustafa Suleyman. Last week Microsoft announced that Mustafa Suleyman, the co-founder of DeepMind, a company Google acquired years back, would now be leading Microsoft’s AI efforts as the CEO of Microsoft AI. “Mustafa will be EVP and CEO, Microsoft AI, and joins the senior leadership team (SLT), reporting to me. I’ve known Mustafa for several years and have greatly admired him as a founder of both DeepMind and Inflection, and as a visionary, product maker, and builder of pioneering teams that go after bold missions,” Satya Nadella, the CEO of Microsoft, wrote.

Mikhail Parakhin. Mikhail Parakhin was supposed to report to Mustafa Suleyman, but it seems that won’t be happening. Parakhin is stepping down from his role as the head of Bing Search and Microsoft Advertising. An internal email obtained by The Verge says:

Mikhail Parakhin has decided to explore new roles. Satya and I are grateful for Mikhail’s contributions and leadership and want to thank him for all he has done to help Microsoft lead in the new AI wave. He will report to Kevin Scott while supporting the WWE transition.

Parakhin led many of Microsoft’s latest AI efforts around Bing Chat, now known as Copilot, and its integration into Microsoft products like Bing Search, Office and other services and devices.

Why we care. With Mikhail Parakhin stepping down, you may expect to see changes to the direction of these efforts in Bing Search and Microsoft Advertising. It is unclear if Mikhail Parakhin will be leaving Microsoft completely or switching to a new role.

Personally, I loved how transparent and accessible Parakhin was with the industry.



from Search Engine Land https://ift.tt/kAzNBHM
via IFTTT

7 tips to improve your website speed in 2024 by DebugBear

A fast-loading website provides a good user experience and helps increase conversion rates. Google also recently updated its documentation to confirm that Core Web Vitals are used by its ranking systems.

Ready to make your website fast? Here are seven tips to help you analyze your site speed and identify potential optimizations.

1. Analyze a network request waterfall for your website

A request waterfall visualization can tell you exactly what happens when opening your website. For example:

  • What resources are downloaded?
  • When do resources start loading, and how long does each request take?
  • How does this correlate with what visitors can see as the website is loading?

This information can serve as the basis for identifying the highest-impact optimizations. You can run a free page speed test on your website to generate a waterfall view.

Request waterfalls provide a lot of detail and can look intimidating. Let’s break down exactly what you need to look for.

Request waterfall view for a website

To interpret a waterfall, look for three key milestones in the loading process of a website:

  • Time to First Byte (TTFB): when the first byte of the HTML response arrives from the server.
  • First Contentful Paint (FCP): when the first page content becomes visible.
  • Largest Contentful Paint (LCP): when the largest content element finishes rendering.

Unless there are redirects, the HTML document will be the first request in the waterfall. Before the TTFB milestone, no other resources can start loading and no content can become visible. Therefore your server TTFB represents a minimum value for the FCP and LCP scores.

HTML document request and TTFB in a request waterfall
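
You can read your own TTFB directly with the Navigation Timing API; this is a quick browser-console sketch, not a substitute for field data:

```typescript
// Run in the browser console: TTFB for the current page.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

// responseStart marks the arrival of the first byte of the HTML response.
console.log("TTFB (ms):", nav.responseStart - nav.startTime);
```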

Next, we’ll look for render-blocking requests. These are requests for important additional resources that need to load before page content becomes visible.

In this example, we can see that there are four render-blocking CSS stylesheet requests. Once these files have finished loading we can see the first content appearing in the rendering filmstrip in the top right.

Render-blocking requests delaying the First Contentful Paint

To optimize the FCP you can:

  • Remove render-blocking resources that aren’t needed to display the page.
  • Reduce the size of render-blocking files.
  • Speed up the requests that deliver them.

For example, in the waterfall above we can see that the app.css file is over 100 kilobytes. This can take some time to download, especially on slower mobile data connections.

To speed up requests, you’ll also want to look at what servers the browser is connecting to when opening the page. A new connection is needed for every new domain that a resource is loaded from, and each new server connection takes some time to establish.

You can identify server connections in the waterfall by looking for three small rectangles in front of the main request. These rectangles represent the network round trips needed for the DNS lookup, TCP connection and SSL connection.

Server connections being made for each new domain
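
The Resource Timing API exposes these round trips per request, so you can confirm which domains are costing you extra connections. A browser-console sketch (cross-origin resources report zeros unless they send a Timing-Allow-Origin header):

```typescript
// Run in the browser console: connection setup cost per resource.
const resources = performance.getEntriesByType(
  "resource"
) as PerformanceResourceTiming[];

for (const r of resources) {
  const dns = r.domainLookupEnd - r.domainLookupStart;
  const tcp = r.connectEnd - r.connectStart;
  // secureConnectionStart is 0 when no TLS handshake was needed (or it is hidden cross-origin).
  const tls = r.secureConnectionStart > 0 ? r.connectEnd - r.secureConnectionStart : 0;
  if (dns + tcp > 0) {
    console.log(new URL(r.name).hostname, { dns, tcp, tls });
  }
}
```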

Finally, we’ll look at the LCP milestone. If the largest content element is an image this can usually be clearly seen by looking for the “LCP” badge in the waterfall view.

After the LCP image has been downloaded the browser quickly starts updating the page and displaying the image. You can see the LCP metric marked by the red line in the waterfall view.

Lazy-loaded LCP image request in a waterfall view

To make it easier to analyze the request waterfall data, many performance tools like DebugBear also include automated page speed recommendations.

DebugBear page speed recommendations

2. Load the most important content first

When loading a website, less important content shouldn’t take bandwidth away from more important requests.

In the example above, lazy loading is applied to the LCP image. That means the browser won’t prioritize this resource. Once the page starts rendering the browser realizes that the image is actually important and the request priority is changed.

As a result, the image only starts loading late, and other requests also use up network bandwidth at that point. We can see this by looking at the dark blue lines within each request in the waterfall, which show when response data is received.

Slow-loading LCP resource

To ensure an LCP image is prioritized, you can do the following (a quick browser-console check follows this list):

  • Make sure it’s not lazy-loaded.
  • Use the fetchpriority attribute to mark it as high importance.
  • Consider using a preload tag to help the browser load the image early.
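
The browser-console check below flags the lazy-loading mistake described above; it is a sketch that assumes the LCP element is an image.

```typescript
// Run in the browser console: warn if the LCP element is a lazy-loaded image.
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lcpEntry = entries[entries.length - 1] as any; // latest LCP candidate

  const element = lcpEntry.element as HTMLImageElement | undefined;
  if (element?.tagName === "IMG" && element.loading === "lazy") {
    console.warn("LCP image is lazy-loaded:", element.currentSrc);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });
```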

3. Reduce download sizes of key early requests

Larger files take longer to download, as bandwidth is limited and loading a large amount of data requires multiple network round trips between the client and the server.

For example, this screenshot shows a large CSS file:

CSS Size analysis with an embedded image

When we look into it more deeply we can see that it contains many images that have been embedded as text. That means that loading these images blocks rendering, even though they are not important for the page and may not be used at all.

To reduce file sizes you can:

  • Use modern image formats like WebP and Avif.
  • Use Brotli compression for text content (like HTML, CSS and JavaScript); a size-comparison sketch follows this list.
  • Analyze your HTML or CSS code to identify embedded images, fonts and data.
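
To see what Brotli buys you over gzip, here is a small Node sketch; app.css stands in for the hypothetical stylesheet from the example above:

```typescript
// Node script: compare raw, gzip and Brotli sizes for a text asset.
import { readFileSync } from "node:fs";
import { brotliCompressSync, gzipSync } from "node:zlib";

const css = readFileSync("app.css"); // hypothetical file

console.log("raw bytes:   ", css.length);
console.log("gzip bytes:  ", gzipSync(css).length);
console.log("brotli bytes:", brotliCompressSync(css).length); // usually the smallest
```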

4. Compare real user data to lab data

Google provides real user data for most websites as part of its PageSpeed Insights tool. Comparing this data to the results of its lab-based Lighthouse test can help you better understand what’s happening on your website.

PageSpeed Insights result with field data and lab data

The lab test result typically reports worse metrics than real user data. That’s because the Lighthouse test uses a slower network connection and CPU than most visitors will have.

Two common reasons your lab testing results are faster than real user data:

  • The PageSpeed insights test is reporting unreliable data.
  • The lab test is hitting a cache while most real users experience slow server responses.

5. Check how your website performance has changed over time

The real user dataset that Google provides, based on the Chrome User Experience Report (CrUX), also includes historical data, even though it isn’t reported in PageSpeed Insights. Seeing how your website performance has changed over time lets you see when a problem was introduced and identify the root cause.

To view historical Core Web Vitals data for your website you can run a DebugBear Core Web Vitals test and then check the Web Vitals tab for a 25-week trend.

Each CrUX data value covers a rolling 28-day time period, so if an issue occurs, it will gradually impact your scores over the following four weeks.

25-week Core Web Vitals trends
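
You can also query that CrUX field data yourself through the public Chrome UX Report API. A hedged sketch (you need your own API key, and only one metric of the response is read here):

```typescript
// Query the CrUX API for an origin's 75th-percentile LCP.
const apiKey = process.env.CRUX_API_KEY; // your own key
const response = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ origin: "https://www.example.com" }),
  }
);

const data = await response.json();
console.log(
  "p75 LCP (ms):",
  data.record?.metrics?.largest_contentful_paint?.percentiles?.p75
);
```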

6. Set up continuous website speed monitoring

If you want to catch regressions (i.e., a deployed change that had a negative impact on website speed) as soon as they happen, you need to set up page speed monitoring for your website.

DebugBear is a monitoring service that provides two types of monitoring:

  • Lab-based testing: Run page speed tests on a schedule in a controlled lab environment.
  • Real-user monitoring: See how your visitors experience your website.

Setting up monitoring for your website will alert you whenever there’s a regression, letting you compare the data from before and after to identify the cause of the slowdown.

Dashboard with scheduled website monitoring tests

7. Look at metrics beyond load time

Website performance isn’t just about the initial loading speed as measured by the LCP metric. Google also considers how quickly a website responds to user interactions, as measured by the Interaction to Next Paint (INP) metric that became a Core Web Vital on March 12.

While LCP mostly depends on what’s loaded over the network, INP looks at CPU processing and how long it takes before the page can process an interaction and update the UI to be ready for the next interaction.
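
A rough field-measurement sketch using the Event Timing API is below; production monitoring should use a maintained library such as web-vitals, which handles percentiles and edge cases properly.

```typescript
// Run in the browser: track the slowest interaction as a rough INP proxy.
let slowestInteraction = 0;

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // duration spans input delay + processing time + time to next paint.
    slowestInteraction = Math.max(slowestInteraction, entry.duration);
  }
  console.log("Slowest interaction so far (ms):", slowestInteraction);
}).observe({ type: "event", durationThreshold: 16, buffered: true });
```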

Measuring INP requires user interaction, which makes it difficult to test in a lab environment. There are some lab-based INP testing tools like the INP Debugger, but they can’t identify all possible user interactions or tell you which elements users interact with most often.

INP Debugger test result

To improve Interaction to Next Paint you need real user monitoring (RUM) data. This data can tell you:

  • What pages have slow INP?
  • What elements are users interacting with?
  • What scripts are contributing to delays?
DebugBear real user monitoring data

Conclusion

To improve your website speed you first must understand what’s slowing it down. Start by running a free page speed test.

A website monitoring tool helps you keep track of Core Web Vitals over time and get notified of regressions. You can start a free 14-day trial of DebugBear here.



from Search Engine Land https://ift.tt/IDphZzC
via IFTTT