The main sins of inhouse marketing

I warn you, dear reader, before you go deeper into this text: it may be harsh for you in places and may change your standard understanding of how marketing works in a company. This applies especially to entrepreneurs employing many people in their marketing departments, as well as to marketers without specific areas of activity. I am happy to discuss it, and it is not my intention to lecture anyone. I do believe, however, that this type of topic is worth opening, and that changing your thinking in this area may help someone.

My conclusions are based on cooperation with the nearly three thousand clients we have had the pleasure of working with since the very beginning of Paraphrase Online. As an organization, we meet dozens of business owners every day and exchange hundreds of emails with people responsible for marketing. We also know a great deal about budgeting in most industries and about the ratio of marketing expenses to revenues in general. Above all, we can see how certain types of activities work, and how others hinder scaling or the ordinary flow of communication.

Sin # 1: Marketing specialist

The proverbial marketing specialist is a dinosaur who does not yet know that an asteroid is flying towards him. Once upon a time, a marketing one-man show was possible. The competition was smaller, and advertising and analytical systems were easier to use and offered very limited possibilities. What is more, there were simply fewer advertising channels to know about. Therefore, in many companies, a few people could take care of everything and, if needed, replace each other. Hiring someone was also much easier due to relatively general market demands. Someone obtained approval from the management board, someone launched a promotion on Facebook with a discretionary budget, someone else ordered leaflets or started cooperation with an influencer. There was usually no time for analytics in such a system. Everything kept moving forward anyway, so for several years no one asked any questions, and marketing departments were covered in a spider web of stagnation.

It ended, however.

Today we are dealing with a completely different battlefield. Every team member must specialize, be well paid, and be good at what they do. They do not have to be a social media ninja or the content visionary of the decade. However, they must be solid craftsmen, which – contrary to appearances – is really difficult to achieve.

Everyone wants to have everything "here and now", without a minimum of persistence and without working on themselves. Finding people who understand that Rome wasn't built in a day is insanely difficult.

Given the current complexity of marketing tools, it is no longer possible to be good at everything. You have to test, search, read, and experiment to get to know a given platform inside out. And sometimes even that is not enough, because there is, for example, an algorithm update or a new platform policy and… you have to start from scratch, since relying on the old schemes will reduce your effectiveness.

But why is it necessary to be good at something?

Because there are more and more companies like yours on the market. Because advertising budgets are getting higher and there are more of them. Because the amount of advertising space does not grow as fast as the number of companies that want to fill it. This creates competition that forces you to optimize your activities. If we rely on the same behavior for several years, we will steadily lose reach among potential customers.

Sin # 2: Budget

We often come across a situation where a company has hired someone to handle Google Ads at a salary of $5,000, while spending only $3,000 or $4,000 on the advertising budget itself…

In the case of a paid channel like Google Ads or Facebook Ads, this is completely absurd. A $5,000 specialist cannot make such a low budget profitable through campaign optimization alone. Someone with those skills simply has to manage, and spend, more in the campaign.

In this type of arrangement – where the cost of the person handling paid channels is, for example, 50% of the company's total marketing spend – it is much better to shift the budget to a proven agency or freelancer. Quite simply: a sensibly targeted budget of $7,000 instead of $3,000 will do far more for most industries. Agencies typically charge between 20 and 30% of that sum for servicing a budget of this size, so the total comes to roughly $9,000 – but you spend more than twice as much on pure advertising than if you hired an in-house specialist.
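To make the comparison concrete, here is a minimal sketch that runs the arithmetic with the hypothetical figures above (every number is illustrative, not a benchmark):

```js
// In-house: $5,000 salary plus a $3,000 ad budget.
const inHouse = { adSpend: 3000, serviceCost: 5000 };

// Agency: $7,000 ad budget plus a ~25% servicing fee (the 20-30% range above).
const agency = { adSpend: 7000, serviceCost: 7000 * 0.25 };

for (const [label, { adSpend, serviceCost }] of Object.entries({ inHouse, agency })) {
  console.log(`${label}: $${adSpend + serviceCost} total, $${adSpend} of it on pure advertising`);
}
// inHouse: $8000 total, $3000 of it on pure advertising
// agency: $8750 total, $7000 of it on pure advertising
```

The totals end up close, but the share actually reaching the advertising channel more than doubles.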

Based on conversations with companies and on taking over accounts that were managed this way, I can guarantee that only then will you feel the real power of paid channels (of course, the budgets given here are hypothetical, since they largely depend on the industry).

Sin # 3: Hiring in the wrong places

Currently, no company can afford to rely on only one advertising platform. In the case of giants such as Facebook or Google, we must be prepared for the possibility that the ability to advertise certain products will be restricted overnight, as happened recently with, for example, masks or sanitary devices.

There are also topics – such as underwear or dietary supplements – that can get accounts blocked for up to several weeks. Even advertising accounts in neutral industries are sometimes suspended for a week or two, because the algorithm decided so and support was slow to react.

Your customers are also present in more than one channel, and marketing must make sure you meet them wherever possible, not just where… we consider it best. In the case of outbound marketing, the more often they encounter your consistent message, the greater the probability that they will finally decide to buy your product or service. Shopping is definitely more impulsive than carefully planned.

That is why most businesses must diversify the channels through which they reach potential customers. Depending on what we do, what budget we have, and what we want to achieve, there is a more or less sensible methodology for staffing the work. Basically, it comes down to answering the question of who is worth having inside the company, and which topics are better handed to external entities.

I know a company that has an advertising budget of $ 40,000 per month. At the same time, it employs:
- A social media specialist who runs the Facebook profile and handles simple advertisements.
- A marketing specialist who runs Instagram and Pinterest, orders printed materials, and collects invoices from various subcontractors.
- A second marketing specialist who coordinates video shoots and, in practice, also acts as a graphic designer.
- An SEO specialist who doubles as a copywriter but has no separate budget for sensible link building.
- A marketing manager who oversees them all and reports upward.

The total monthly cost of maintaining these employees, in this arrangement, is $38,000. This company therefore spends roughly $80,000 a month on marketing, of which only half actually goes to advertising channels. Of course, organic work on Instagram, Facebook, or Pinterest can be worth a lot of money. In this case, however, the people in the example do not even know whether what they are doing works…

In such a case, it would be wise to reduce headcount by one or two people, keep the strategic resources within the company, and release the remaining funds to additional channels – which can only be selected after a meaningful audit of the activities carried out so far. You can also decide on a new division of competences and send specific people to specialized training. In this example, they should probably start with a solid web analytics course to map out further directions for acquiring new competences.

In conclusion, in most cases it will be a good idea to have a mix that accommodates both internally employed specialists and agencies providing selected services. What matters most in this system, however, is that in the end the cost of servicing and maintaining agencies and specialists does not exceed 30% of the company's total marketing budget. When hiring, remember that each person should bring a specific skill to the company. Adding another person to a project does not automatically mean the project will speed up or become more effective. In principle, it can be the other way around.

Core Web Vitals

The internet is constantly changing. The websites of its early days barely resemble those that accompany us today. Not only their appearance and the way they present information have changed, but also our expectations. The differences include the way a website is assessed – today it is not enough that it simply IS. In this context, we need to consider what Core Web Vitals are and what they actually tell us.

Core Web Vitals – website quality comes first

For a long time, Google has placed great emphasis on the speed and readability of the content websites present. One of the first elements of these changes was certainly the shift towards users of mobile devices (also known as "mobilegeddon"), which in 2015 electrified SEOs and web developers. The next was the requirement to ensure the security of the website and its users through SSL. Now attention is turning to user experience.

In May 2021, page quality will become part of the Google algorithm and will influence a site's position in the search engine ranking. According to official sources, the quality of the website will not matter more than its content, but it will help promote websites that deliver that content in a manner tailored to the user's needs.

The quality of the website will be analyzed in terms of several factors mentioned earlier:
- adaptation to mobile devices (mobile-first indexing for all websites from March 2021);
- browsing safety – no harmful or misleading content, i.e. malware and phishing elements;
- HTTPS;
- no full-screen ads that prevent access to website content.

In addition, there are the Core Web Vitals, referred to in the rest of this article as the Basic Internet Indicators. These indicators are designed to help gauge the user experience of page loading and responsiveness, during and after rendering.

Google emphasizes that work on the method of assessing website quality will continue. In the coming years, the list of factors taken into account will certainly change and be expanded with new points. We will be informed about changes in advance, and the search engine will also give us time to adapt our websites to the new requirements.

Basic Internet Indicators – let’s take a closer look at them

One of the more controversial SEO topics is website speed. In specialist groups we have witnessed, more than once or twice, long discussions about whether a high PageSpeed Insights score improves a page's position. It is true that a site's score in the tool does not translate directly into search engine rankings. It is known, however, that a website that has no problem displaying its content correctly even over a weaker internet connection will be welcomed warmly by users. This, in turn, translates into the length of the visit, the willingness to share the link, and the chance of achieving the conversions we aim for.

The tests carried out by the tools I mentioned in the page speed article check specific elements of the site, but they do not provide full information about the experience of an individual user. Their analysis is necessary, but it focuses on aspects of the website other than the Basic Internet Indicators.

Core Web Vitals let us describe the experience our website users actually have. The data used to determine a website's quality level is collected from real visits to the site. Thanks to this, the site's behavior is examined across a wide cross-section of cases – different connection speeds and different device sizes and technological capabilities.

Core Web Vitals in the current version focus on three elements:
- the rendering of the largest piece of content (Largest Contentful Paint),
- the delay on first input (First Input Delay),
- the cumulative layout shift (Cumulative Layout Shift).

As I mentioned before, Google is planning to expand its website quality analysis and we can expect that this list will expand in the coming years. Google does not exclude the possibility of modifying current indicators, e.g. extending the scope of research.

LCP – Largest Contentful Paint – Largest content rendering

LCP determines the rendering time of the largest fragment of the page visible in the browser window (everything we see before scrolling). The elements taken into account are:
- graphics,
- video,
- elements with background graphics loaded using the url() function in CSS,
- block elements (e.g. <p>, <div>, <ul>, <ol>, the heading tags <h1>–<h6>, etc.).

If the content of individual tags extends below the scroll line, those fragments are not taken into account in the analysis. In the case of graphics, the analysis is based on the smaller of the element's sizes: if a graphic is displayed smaller than the dimensions specified in the tag, its rendered size is what counts. When the graphic is stretched, the size of the sample taken for testing depends on the values specified in the CSS.

How do you know which element is the largest? Web pages render in stages, and the HTML code is read in order, from the first line to the last. As the site renders, the object marked "largest" will therefore change. A given fragment of the page is reported as the "largest contentful paint" only once it has fully loaded. Note: if the user scrolls while the page is rendering, further changes to the "largest contentful paint" are not taken into account.

What about elements that load off-screen and only later appear within the user's view? In most cases, these fragments will not be reported. In the opposite situation – when they render within the viewport and are then pushed below the scroll line – they will still be taken into account and may be included in the final result.
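To see which element is being reported on a given page, LCP candidates can be logged in the browser console with the standard PerformanceObserver API (a minimal sketch; "largest-contentful-paint" entries are exposed by Chromium-based browsers):

```js
// Log each LCP candidate as the browser reports it; the last entry logged
// before the user interacts with the page is the final LCP element.
const lcpObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
  }
});
// 'buffered: true' replays entries recorded before the observer was created.
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```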

Largest Content Rendering – Score

In PageSpeed Insights, values are provided in two ways:
- as a percentage, based on data collected from users over the last 28 days (in this example, LCP was satisfactory in 39% of cases),
- in seconds, based on tests performed during a page load simulation.

PageSpeed Insights analyzes data for the specified address. For more detailed information about pages across the entire domain, it is worth checking Google Search Console and one of its newer tabs, labeled "Basic Internet Indicators". Two reports are available there: one for pages displayed on mobile devices and one for computers.

The report in GSC lists the pages with poor LCP results. Where the tool has identified the same problem on many subpages, the table contains one address and a note about the number of pages on which the error repeats. The exact addresses are available in the extended view, shown after clicking on the record.

Poor LCP Score – Effective Ways to Improve

The key to a good LCP score is optimizing the points that slow down the website. A slow server, JS and CSS code that blocks further rendering, and client-side rendering are the most common causes of poor results. Let's take a look at remedies for the most common problems:
- slow response from the server – the key value here is TTFB, Time To First Byte (the time from sending the request to receiving the first byte of the response). A slow response is reported e.g. in PageSpeed Insights and can also be checked with other website speed analysis tools.

The primary way to improve the TTFB score is to optimize the server and eliminate processes that degrade its performance. For this purpose, it is worth looking at the advice of your hosting provider or using the help of a specialist.
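Before involving a specialist, the TTFB of the current page can also be read directly in the browser console (a sketch using the standard Navigation Timing API):

```js
// Read the navigation timing entry for the current page load.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  // responseStart marks the arrival of the first byte of the response;
  // startTime is 0 for the navigation entry, i.e. when navigation began.
  console.log(`TTFB: ${(nav.responseStart - nav.startTime).toFixed(0)} ms`);
}
```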

- JavaScript and CSS blocking rendering – the tricks recommended for any website speed optimization apply here: CSS minification, removing unused fragments, and considering asynchronous loading of style files (sketched below, after this list). In terms of JavaScript, the optimization is similar – minification of JS files and removal of unused fragments.

Optimization of other elements, e.g. graphics compression, use of CDN and prioritization of loading certain resources, may also affect the improvement of the LCP score.
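As promised above, one common way to take non-critical stylesheets off the rendering path is to attach them from a script that runs after the first paint (a sketch; the stylesheet path is hypothetical):

```js
// Append a non-critical stylesheet once the page has loaded, so the
// first render does not wait for it.
window.addEventListener('load', () => {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/non-critical.css'; // hypothetical path
  document.head.appendChild(link);
});
```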

FID – First Input Delay – Delay on first input

FID refers to the interactivity of the website – it measures the time between the first action performed by the user and the moment the browser starts handling that action. Clicks (on mobile devices – taps) and key presses are taken into account. The FID score focuses on the response time; it does not measure how long the user-initiated event takes to process.

The lag on the first action will not always be measured – some users choose not to interact with the page while it is loading. Interestingly, in the case of this indicator, the analysis takes place only during the actual use of the website – similar studies are not carried out during the simulation performed by Lighthouse.

Every internet user has surely found themselves wanting to click through to another place immediately after entering a website. Yet clicking a link or expanding the rest of the text was not possible right away – the site's reaction to our input was significantly delayed. Why does this happen? While the site is rendering, the browser is busy processing the files that make up the page. Only after completing this process can it respond to user behavior. The wait for the page to become interactive translates directly into the user's subjective assessment of it as "slow" or "fast".

When describing LCP, I mentioned that HTML is loaded line by line – so why can't the already-loaded fragments become active earlier? Some tags, such as <input>, <a>, or <select>, cannot respond until the main thread is free. As for other obstacles – the <head> section of the HTML file usually contains links to CSS and JS resources that the browser must parse before it continues rendering the page. Hence the earlier suggestions to minify these files (i.e. remove whitespace) and to move code fragments unnecessary for rendering the first view further down. Events registered by "event listeners" placed in JS can only be handled after all the code has been executed.
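In the field, FID can be measured with the same observer pattern as LCP (a minimal sketch; the "first-input" entry type is available in Chromium-based browsers):

```js
// Report the delay between the user's first interaction and the moment
// the browser was able to start handling it.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const fid = entry.processingStart - entry.startTime;
    console.log(`FID: ${fid.toFixed(1)} ms (event: ${entry.name})`);
  }
}).observe({ type: 'first-input', buffered: true });
```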

Delay on first action – result

A satisfactory result is in the range of 0–100 ms. Above this time it is worth introducing corrections, and above 300 ms – you have to.

The information in PageSpeed Insights is based on real user interactions – in this example, 82% of cases are satisfactory.

More detailed data, including the specific addresses that need improvement, can be found in the "Basic Internet Indicators" tab in Google Search Console.

Poor FID score – effective ways to improve

Ideally, the tools would report 95–99% "green" responses. If your current results are not that high yet, making changes to the site will certainly do it good.

As in the case of LCP, it is recommended first of all to optimize the code necessary to load the key elements of the website and move lower-priority fragments to the rest of the file. It is also worth removing unused fragments and minifying what remains.

The main reason for low FID results is usually JS code. Optimizing files and breaking individual fragments into smaller, asynchronously executed batches should significantly speed up rendering and reduce the time resources stay blocked.
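Breaking one long task into smaller batches can be as simple as yielding back to the main thread between chunks, so user input can be handled in the gaps (a sketch of the general pattern; items and processItem are hypothetical placeholders):

```js
// Process a large array in small batches, yielding between batches so the
// browser can respond to clicks and taps in the meantime.
async function processInChunks(items, processItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(processItem);
    // Yield: give the browser a chance to handle pending input.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```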

Code coming from outside the website will also affect how quickly the browser responds – limiting it reduces the amount of work the browser has to perform.

CLS – Cumulative Layout Shift

The CLS score is the sum of changes caused by unexpected shifts in the page layout. A change counts when an element changes its position during rendering. CLS does not count new elements appearing on the page or existing ones being resized, as long as they do not cause other page components to move. Unlike the indicators listed above, CLS collects data throughout the entire visit to the page, not just during the first moments of rendering.

Jumps in the page layout are caused, among other things, by asynchronously loaded resources or by elements dynamically added to the DOM structure. Such situations are noticeable, for example, on pages where graphics are loaded without declared dimensions – fitting the image to the screen may cause a sudden change in the page layout, moving the user to a different part of it.

Resources downloaded from external sources and built with dimensions different from those on the target page, as well as personalized content generated without proper formatting, may also be problematic here. Unexpected shifts can also be caused by late font rendering.

Not every shift is bad: depending on the page design, some user actions will legitimately change the original layout of elements. What matters is that this type of shift is directly related to the action performed.

CLS is based on data collected from users and describes the actual experience on the site, which is not always reproducible in the conditions in which the website developer works.
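Layout shifts can likewise be observed in the field (a minimal sketch; "layout-shift" entries are available in Chromium-based browsers, and shifts that follow recent user input are excluded, as the metric requires):

```js
// Accumulate the score of unexpected layout shifts over the page's lifetime.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts shortly after user input are expected and do not count.
    if (!entry.hadRecentInput) {
      cls += entry.value;
      console.log(`CLS so far: ${cls.toFixed(3)}`);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```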

Cumulative Layout Shift – Score

A good CLS score is in the range of 0–0.1. A score above 0.25 means that changes are needed to improve the user experience. A website is considered stable when at least 75% of users receive a score below 0.1.

Data on shifts can be found in PageSpeed Insights. The percentage score is based on an analysis of user data – in this example, 90% of visits get a good score. PSI also provides information collected under laboratory conditions. Detailed information on more subpages is available in Google Search Console.

Poor CLS Score – Effective Ways to Improve

A better CLS score can be achieved by adhering to general website-building standards. Precisely describing graphics and videos plays a significant role here – adding size attributes (width, height, and the aspect ratio associated with them) makes it easier for the browser to arrange elements correctly during rendering, without reflowing as subsequent pieces of content load. For dynamically added graphics, it is recommended to use placeholders that reserve space for the element added later.

For some time now, PageSpeed Insights has recommended using the CSS font-display property, which controls how text is displayed while its font is loading. Another option is to load the font in advance with rel="preload" on a <link> element. Preload causes selected resources to be downloaded before the page renders, which eliminates downtime later in the page display, when rendering would otherwise wait for files from the first part of the queue.
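In practice the preload hint is usually written straight into the HTML <head>; here is a script-form sketch of the same idea (the font path is hypothetical):

```js
// Hint the browser to fetch the font before rendering reaches the CSS that
// references it. Note: font preloads require the crossorigin attribute.
const preload = document.createElement('link');
preload.rel = 'preload';
preload.as = 'font';
preload.type = 'font/woff2';
preload.href = '/fonts/main.woff2'; // hypothetical font path
preload.crossOrigin = 'anonymous';
document.head.prepend(preload);
```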

Basic Internet Indicators – UX Matters!

If you look at the solutions that have a positive impact on the individual indicators, it is easy to see that they are not new. Many of them were recommended by Google as early as the first versions of PageSpeed Insights. However, the indicators and their entry into the search engine algorithm are a good reason to take a closer look at your websites and actually make the changes that will make them easier to use.
