Content Shifting, also known as Content Jumping, happens when elements of a web page change size or position while the page is loading. This can be disorienting for users, and in extreme cases can cause problems if a user clicks on an element as it moves, triggering unintended input. It is also expected that from 2021 search engines will begin penalising pages that exhibit content shifting (Google measures this as Cumulative Layout Shift).
To avoid content shifting, the element's height should be set in advance, regardless of the dynamic content it will contain.
When using Vue or React JavaScript frameworks, I've found that passing parameters from the template into CSS is a useful technique. In my example, a Vue app loads dynamic content and displays it. Unfortunately this causes content shifting.
To avoid this, I pass the number of rows in the HTML/view:
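A rough sketch of the markup; the component structure and the `--rows` custom property name are illustrative rather than my exact code:

```html
<!-- The number of rows of data is exposed to CSS as a custom property -->
<div class="item-grid" :style="{ '--rows': items.length }">
  <div v-for="item in items" :key="item.id" class="item">
    {{ item.title }}
  </div>
</div>
```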
In my CSS/SASS stylesheet, I then use this parameter in a calculation. There are two columns, so the number of rows is divided by 2. It is then multiplied by the height of a single row, and a minimum height is set for the container so that it will not shift:
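The stylesheet then looks something like this; the `$row-height` value and class name are placeholders:

```scss
// Reserve the container's height up front so it cannot shift.
$row-height: 48px;

.item-grid {
  // Two columns, so rows / 2, multiplied by the height of a single row.
  min-height: calc(var(--rows) / 2 * #{$row-height});
}
```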
I’ve used Twilio for a while for programmatically sending and receiving SMS messages. There’s also a visual editor called Studio that can be used to make call and message flows.
It can be connected to Twilio Autopilot to build AI-powered bots. Tasks are trained with sample phrases: variations on what a caller might say to trigger an action, e.g. 'Call reception,' 'Front desk,' 'Talk to a human.'
An example that comes to mind is a call handling system for an office. Rather than a voice menu that lists each option followed by a number, the caller could simply say who they wanted to talk to or what their request was about, and the system would route the call accordingly. This is far more respectful of the caller's time than making them listen to a long list of choices.
It works with both SMS and voice calls, and seems a good way to build an IVR (Interactive Voice Response) system. TwiML can be used for more complicated tasks while still using Studio/Autopilot. The pricing is a little higher than a self-hosted system, but so much functionality is provided out of the box that it seems well worth the extra cost for the time it saves and the complexity it avoids.
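As a rough illustration of what TwiML looks like, the snippet below answers a call, speaks a short message and dials a number; the phone number is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <!-- Speak a short confirmation, then connect the caller -->
  <Say>Connecting you to reception.</Say>
  <Dial>+15550100</Dial>
</Response>
```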
Using an ESP-32 board with an embedded E‑Paper display, I created a gadget that shows status information from my web server.
E‑Paper, also known as E‑Ink, only draws power while the display is being updated and uses none between updates. This means the gadget can run for weeks from a rechargeable battery.
The gadget sits on my wall or desk and shows regularly updated information from my web server, keeping me informed of web site problems and statistics. The information displayed can easily be changed, for example to the latest weather, news, currency prices or anything else that can be accessed over the internet. Because it uses E‑Paper, it consumes very little power and produces almost no heat compared to a computer display or television.
You can view my code on GitHub if you are interested in making your own.
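The core of the firmware is a simple fetch-and-sleep cycle, roughly like the sketch below; the Wi-Fi credentials, status URL and wake interval are placeholders, and the E‑Paper drawing code itself is omitted here.

```cpp
// Minimal ESP32 (Arduino) sketch of the fetch-and-sleep cycle.
// Credentials, the status URL and the wake interval are placeholders.
#include <WiFi.h>
#include <HTTPClient.h>

const char* WIFI_SSID = "my-network";
const char* WIFI_PASS = "my-password";
const uint64_t SLEEP_US = 30ULL * 60ULL * 1000000ULL;  // wake every 30 minutes

void setup() {
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) {
    delay(250);
  }

  HTTPClient http;
  http.begin("http://example.com/status.json");  // placeholder status endpoint
  if (http.GET() == HTTP_CODE_OK) {
    String status = http.getString();
    // ...render `status` to the E-Paper display here...
  }
  http.end();

  // E-Paper keeps its image without power, so deep sleep until the next update.
  esp_sleep_enable_timer_wakeup(SLEEP_US);
  esp_deep_sleep_start();
}

void loop() {}
```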
For a long time I have wanted to build a robot car that could be driven over the Internet at long range, using 4G/LTE cellular connectivity. So I did.
Building the robot: chassis with four motors; chassis with four motors and top section attached; the completed robot.
The robot connects to the Internet over Wi-Fi. I was able to slightly increase the effective Wi-Fi range by using a MikroTik router and adjusting the hardware retries and frame lifetime settings, with the intention of recovering quickly from transmission errors and avoiding congestion: video packets that could not be delivered in real time were discarded, keeping the network clear for when transmission would succeed. I also used iptables and the mangle table to alter the DSCP of the live video stream packets with the same intention.
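The mangle rule was along these lines; the protocol and port are placeholders for whatever the video stream actually uses:

```bash
# Mark outgoing live video packets as Expedited Forwarding (DSCP EF).
# UDP source port 5000 is a placeholder for the stream's real port.
iptables -t mangle -A OUTPUT -p udp --sport 5000 -j DSCP --set-dscp-class EF
```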
To enable a long range connection, I used Twilio Programmable Wireless to connect to local 4G/LTE cellular networks. I substantially lowered the data rate to around 250 Kbps to make transmission more reliable and reduce costs, and was able to get a virtually flawless live feed.
Twilio Wireless Internet of Things Starter Pack; monitoring 4G/LTE data usage with Twilio Programmable Wireless.
The live video and audio stream uses FFmpeg for compression and streaming, which has a plethora of settings to tune. I took the time to tune the bitrate, buffering and keyframe interval. I also ensured the web camera could natively encode video over UVC at the selected resolution, to reduce the load on the Raspberry Pi's CPU. Video latency was often under a second, which is impressive, especially considering the round trip involved.
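For illustration, the invocation was broadly of this shape; the device, resolution, bitrate and output URL here are placeholders rather than my exact settings:

```bash
# Low-bitrate, low-latency stream from a USB webcam (values are placeholders).
# With a camera that encodes H.264 natively over UVC, '-input_format h264'
# plus '-c:v copy' would avoid re-encoding on the Pi's CPU.
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -b:v 250k -maxrate 250k -bufsize 500k -g 50 \
       -f mpegts udp://example.com:1234
```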
Robot remotely controlled via the internet
The control system uses Let’s Robot (now Remo.tv), based at Circuit Launch in California, which has a community of robot builders who love to create and share their devices. The programming language of choice is Python, and I also linked to an existing API I had created in JavaScript with Node and PM2.
Mission 1
Mission 1 — 30 minute Night Voyage
The first 4G/LTE long range mission was successful, and the webcam was good enough to be used at night. Different members of the community took turns to drive the robot. It didn't always drive straight, so we had to drive forward and turn to the left at regular intervals. The robot drove for around 30 minutes, and then got stuck when it fell off the edge of a sidewalk. I had to quickly drive out to retrieve it =)
Mission 2
Mission 2 — Involved Drama
The second mission was intended to drive from my location to a friend working at a local business. However, halfway through the mission a suspicious member of the public grabbed the robot, threw it in a trash can, and called the police. I waited for the police and calmly explained that the robot was an educational project in telepresence, and told the person who reported it that there were no hard feelings, despite them interfering with and damaging my personal property.
Police!
Mission 3
As part of the community site, it is common to leave your robot open for anyone to control. While it was unattended, a sneaky individual drove my robot into a void in the house and managed to get it covered in spider webs and other filth, as you can see below. Thanks.
Covered in cobwebs; very dirty.
I found that cats were very curious about the robot invading their territory, as you can see below:
A curious cat investigates the robot
I was very pleased with how the project worked out. I had the opportunity to use Python and Node, to fine-tune wireless networking and live video streaming, and of course to remotely control the robot, as I had wanted to do for a long time.
If you want to build your own robot, the guide to ‘building a Bottington’ is a great place to start.
Update: Twilio saw this post and gave me a $20.00 credit. Thank you 😁
Here are a few important ways to speed up page loading times, together with the times recorded on a typical WordPress web site for comparison. While WordPress is hardly an optimized web application, it benefits from the same speedup methods as most web applications.
I used Google Chrome Developer Tools to time network transfers and page load times. There are various web-based tools available as well.
Using compression such as gzip on network transfers can greatly reduce file sizes, especially for text-based files such as HTML, CSS and JavaScript. The CPU overhead on modern servers is negligible, and the compressed output can be cached if required.
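A typical Nginx configuration for this looks something like the following; the compression level and MIME types are reasonable defaults rather than tuned values:

```nginx
# Compress text-based responses; HTML is compressed by default when gzip is on.
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/css application/javascript application/json image/svg+xml;
```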
PHP Opcode cache — 1.299 sec (TTFB 0.124 sec)
PHP scripts are typically compiled to bytecode on demand. By caching this compilation with OPcache or APC, page load times and server load can be significantly reduced. APC also included a fast key/value cache, which has since been replaced by APCu.
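OPcache usually only needs a few php.ini settings; the values below are common starting points rather than the exact configuration I used:

```ini
; Cache compiled PHP bytecode in shared memory
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
; Only re-check scripts for changes every 60 seconds
opcache.revalidate_freq=60
```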
Nginx can use a fast memory/disk cache in front of PHP-FPM, caching responses and further reducing page load times and server load. This can be very beneficial on web sites with high traffic.
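A minimal FastCGI cache setup looks roughly like this; the cache path, zone name, socket path and validity period are examples rather than my exact configuration:

```nginx
# In the http block: a disk-backed cache zone (path and sizes are examples)
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:10m
                   max_size=100m inactive=60m;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        # Cache successful PHP responses for five minutes
        fastcgi_cache phpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 5m;
    }
}
```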
There are many other ways to speed up page load times, including concatenating and minifying dependencies and optimizing images. It is also important to optimize client-side JavaScript so the user's web browser can display content quickly.
AnyCast DNS
An initial visit to a web site requires a DNS lookup. Traditional DNS has no way to direct that lookup to the geographically closest server, but AnyCast DNS does: multiple servers distributed throughout the world share the same IP address, and routing delivers each query to the nearest one. This feature is available from many providers, including Amazon's Route 53, Google Cloud Platform and Microsoft Azure.
By using AnyCast DNS, I was able to reduce an initial DNS request from 93 milliseconds to 18 milliseconds. Combined with an optimized web server that is geographically close, even an initial visit to a web page can be displayed almost instantly.
Before AnyCast DNS; after AnyCast DNS.
Conclusion
Subtracting the round trip time to the server of 0.116 seconds, these optimizations reduced the effective Time To First Byte to 3 milliseconds. On a busy server, these optimizations will make a significant difference to the capacity of the server.
The general push to use SSL/HTTPS for every web site is improving security and privacy on the Internet. However, every request a web site makes will need to be secure, or browsers can remove the ‘Secure’ indicator, show a warning symbol, and sometimes pop up errors.
You can add a simple header that will tell browsers to report back to your server if any insecure requests are made. I combined this with a simple PHP script that logs to the server’s error log. This alerts me to sites I host and develop that have insecure content, so I can fix them.
Step 1 — Add the Content Security Policy reporting header
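Something like the following works, shown here as an Nginx add_header directive; the policy itself and the report path are examples to adjust for your site:

```nginx
# Report-Only: nothing is blocked, but violations (such as insecure http://
# requests on an https:// page) are POSTed to the report URI.
add_header Content-Security-Policy-Report-Only "default-src https: data: 'unsafe-inline' 'unsafe-eval'; report-uri /csp-report.php" always;
```

Step 2 — Log the reports with a PHP script

A minimal version of the PHP script mentioned above simply writes the browser's JSON report to the error log; the file name matches the report-uri used in the header:

```php
<?php
// csp-report.php: append the browser's CSP violation report to the error log
$report = file_get_contents('php://input');
if ($report !== false && $report !== '') {
    error_log('CSP report: ' . $report);
}
http_response_code(204); // nothing needs to be returned to the browser
```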
Now, when a site attempts to load an insecure resource, you will get a message in your error log, and you can use this information to fix your site.