Mac Screenshot Drop Shadows

In web development, taking screenshots to share with others (e.g. clients) is something I do with some frequency. By default, screenshots are hard-edged images that don’t look all that great. To add a little professional-looking something extra, I like to give my screenshots drop shadows.

I work mainly on a Mac, and the default screenshot functionality gives you several options for taking screenshots. Without any special software like Photoshop, you can use the Preview app to annotate images with ease. However, one of the limitations of Preview is the inability to add treatments like drop shadows.

There are lots of image manipulation apps you can use to add drop shadows, but then you have to open another app just to add a shadow. Using Automator and a couple of command line tools, you can set up a keyboard shortcut that does this for you. It takes a bit of setup, but once the keyboard shortcut is in place it feels like built-in functionality, regardless of what is going on behind the scenes.

My Solution

I took to the internet to find a solution. I knew I was not the only one who wants to add drop shadows to screenshots, so I assumed someone had already come up with something. After all, adding drop shadows is common enough that some third-party screenshot tools have a button for it. I am happy with the default screenshot capabilities of my Mac, so I did not want to install a screenshot app solely for drop shadow capabilities.

I came across a post on Stack Overflow that did most of what I was looking for. The main issue I had with the SO solution is that it saved the image with the drop shadow to the clipboard instead of creating a file, which means I would have to use another app to get the image off of my clipboard. You can use the Preview app for this, which is not really an inconvenience because I am usually annotating the images anyway. Still, I’d rather have the screenshot saved to my Desktop, just like the default functionality. The script I found was easy enough to understand that I was able to tweak it to do what I wanted.

Here is the solution I am using:

# Paste the clipboard image into a temp file
/usr/local/bin/pngpaste /tmp/to-add-dropshadow.png
# Clone the image, turn the clone into a shadow, layer it behind the original, and flatten
/usr/local/bin/convert /tmp/to-add-dropshadow.png \( +clone -background transparent -shadow 30x15+10+10 \) +swap -background transparent -layers merge +repage /tmp/has-drop-shadow.png 2>/dev/null
# Build a timestamp for the filename
DATECMD=`date "+%Y-%m-%d %H.%M.%S"`
# Copy the shadowed image to the Desktop with a screenshot-style name
/bin/cp /tmp/has-drop-shadow.png ~/Desktop/"Screenshot $DATECMD".png

Like the referenced Stack Overflow solution, this requires installing a couple of commands, but unlike that post you do not need the command used to copy a temp file back to the clipboard. Instead of copying the output to the clipboard, I simply copy it to the Desktop.

To get started you need to install a couple of commands using Homebrew. If you don’t have Homebrew, go get it; it makes installing command line tools a breeze. The two tools you need are pngpaste, used to paste the clipboard contents to a file that can be manipulated, and imagemagick, used to manipulate the image file. Installing them with Homebrew is super simple:

$> brew install pngpaste imagemagick

Hit enter and let Homebrew do its magic. Once these tools are installed you can create an Automator workflow and map it to a keyboard shortcut. You may need to update the script depending on where Homebrew installs commands. To find where your commands are located, use the which command. It looks like this:

$> which pngpaste
/usr/local/bin/pngpaste

In my case the commands are located in /usr/local/bin. It is safe to assume both of the required commands are in the same directory, but you can run which again on the other command, convert (which comes from the imagemagick package), to make sure:
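$> which convert
/usr/local/bin/convert

That is where it lives on my machine; on Apple Silicon Macs, Homebrew installs to /opt/homebrew/bin instead, so adjust the paths in the script accordingly.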

Setting up the script

The linked Stack Overflow post has a more thorough set of instructions with screenshots, but here is a quick overview.

  1. Open the Automator app and create a new “Quick Action”.
  2. Search for “Run Shell Script” and double click on it.
  3. In the “Workflow receives” drop down, select “No input”.
  4. Paste the above script into the text area (make sure to use the correct paths to the commands you installed) and save the workflow. Give it a meaningful name, like the recommended “Add Dropshadow To Clipboard Image” so you can find it easily when setting up the keyboard shortcut.
  5. Open the “System Settings > Keyboard” menu and click the “Keyboard Shortcuts” button.
  6. Next, select “Services”, expand the “General” section, and your new service should be listed. Double click on “none” and enter your desired keyboard shortcut by pressing the key sequence you want to use. I opted for the suggested CMD + CTRL + Shift + 5 shortcut because it is almost the same as the shortcut used to take the screenshot, making it easier for me to remember.

That is all you need to do. To test it out, take a screenshot using CMD + CTRL + Shift + 4, punch in your new shortcut, and look at your Desktop; you should see the new image there. It’s hard to tell from the thumbnail, but there is a subtle drop shadow.

Additional Notes

The way this script is configured, the shadow is very subtle; if you want a stronger shadow you can adjust your script. The 30x15+10+10 value is the setting that controls the shadow’s appearance. It breaks down into opacity x blur_strength + horizontal_distance + vertical_distance and is covered in the imagemagick documentation, though to be honest, the docs are not that helpful.

Opacity, or how transparent the shadow is, is a number (a percentage) from zero (0), completely transparent, to 100, completely opaque. Blur strength, or blur radius, is how blurry the shadow appears; the higher the number, the blurrier the shadow. The horizontal and vertical distances determine the shadow’s direction. The higher these numbers, the more down and to the right the shadow appears; the lower the numbers (even negative), the more up and to the left it appears. If both are set to zero (0), the shadow will appear directly behind the image with an equal amount of shadow peeking out around all of the edges.
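For example, a convert line like this (the numbers here are just an arbitrary starting point for experimenting, not a recommendation) produces a darker, blurrier shadow pushed further down and to the right:

/usr/local/bin/convert /tmp/to-add-dropshadow.png \( +clone -background transparent -shadow 60x25+15+15 \) +swap -background transparent -layers merge +repage /tmp/has-drop-shadow.png 2>/dev/null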

The filename used for the screenshot mimics the default screenshot filenames used for the US locale, but not exactly: I have configured the script to use the 24 hour time format so the screenshots I take in the afternoon sort after the screenshots I take in the morning. If you want the format to match the default, modify the DATECMD format string to be +%Y-%m-%d %l.%M.%S %p, or change it to whatever you want. The reason the date is broken out from the final copy to the Desktop is the space between the date and the time; putting it inline with the copy command causes errors when attempting to name the file.
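With that change, the date line would look something like this:

DATECMD=`date "+%Y-%m-%d %l.%M.%S %p"`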

This script makes use of two temporary files: the raw version of the screenshot to be manipulated (/tmp/to-add-dropshadow.png) and the manipulated version (/tmp/has-drop-shadow.png), which is ultimately what is copied to the Desktop.

The script is just a couple of shell commands, which means any shell commands can be run, so you can modify this script to do anything you want; for instance, you can have the script clean up after itself by removing the /tmp files, as shown below.
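Tacking a line like this onto the end of the script (a minimal sketch) would delete both temp files once the copy to the Desktop is done:

/bin/rm /tmp/to-add-dropshadow.png /tmp/has-drop-shadow.png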

Conclusion

Adding subtle drop shadows can add a level of professionalism to screenshots. Most of the time the shadow will go unnoticed and may not feel like it is worth the effort, but if you are adding the screenshots to documentation it helps the image “pop” and feel less hard-edged.

Here is a comparison of screenshots with and without the drop shadow. Seeing them next to each other shows just how subtle the shadow is and how much more “professional” the image looks with it. It also helps the image stand out instead of blending in with the background.

The solution in this post is just one of many ways to add drop shadows to images, but it is one I am happy with. Although it requires two keyboard shortcuts, one to take the screenshot and one to add the drop shadow and save the image to the Desktop, it saves me from having to open an image manipulation application. It did require me to switch from my favorite CMD + Shift + 4 shortcut to CMD + CTRL + Shift + 4, which saves to the clipboard instead of the Desktop. But with one additional shortcut, I get a screenshot I am happy with and do not have to manipulate by hand.

Optional properties with JavaScript

JavaScript has been around for ages, and as a backend developer I have usually left it for the “Front-Enders” to deal with unless absolutely necessary. That’s not to say I haven’t written my fair share of JS over the years; after all, I’ve been dealing with it in one form or another since you were required to specify the JS version in the script tag, all the way back to version 1.2, before jQuery was a thing.

Since I started dabbling more and more with custom Gutenberg Blocks, it seems as though I am learning something new every day. One of the cool things I have learned recently is optional properties. The actual term for the functionality is “optional chaining,” but either way it is cool.

Say you have the following object in its ideal state (abbreviated for clarity):

const postObject = {
    ...
    title: {
        rendered: "My Title",
        ...
    },
    ...
}

In order to access the rendered title you simply call:

postObject.title.rendered

Easy enough, but what if your object is not in the ideal state? What if it’s empty? Not a problem! JavaScript makes this super easy. Instead of having to check hasOwnProperty a bunch of times, you can use the optional chaining operator ?. and be on your way:

postObject?.title?.rendered

Here is a more practical example using JSX (this is Gutenberg, so React, and it shows the benefit much more clearly):

return (
    <p>{ postObject?.title?.rendered || '' }</p>
);

This will output either the rendered title or an empty string when title or title.rendered is missing from your postObject object (the optional chain evaluates to undefined, and || turns that into the empty string).

This makes your code much cleaner and, as a result, easier to read. As a backend dev, I really wish PHP made it this easy to avoid the pile of notices, warnings, or errors, instead of having to reach for empty($post_object->title->rendered).

Yes, I know data integrity could also solve the problem by ensuring the data is always formatted correctly so you don’t need to worry about it, but that is not always possible, especially when working with third-party REST APIs.

I know it’s nothing special, just something I thought was neat. I hope you were able to learn something from this. I linked the MDN documentation above if you want to know more!

Update

PHP 8.0 introduced the nullsafe operator, which offers functionality very similar to JavaScript’s in that you can now do things like:

echo $my_object?->maybe_a_property ?? 'something';

This is just a quick example; I recommend reading the linked article for more details and to learn the rest of what this functionality can do.

Keep in mind, this only falls through to the default if the property (or method) returns null; it does not replace an empty check. See this example:

...rest of the code building $my_object...
$my_object->maybe_a_property = '';
$my_var = $my_object?->maybe_a_property ?? 'something';
var_dump( $my_var );

Output:
string(0) ""

However:
$my_object->maybe_a_property = null;
$my_var = $my_object?->maybe_a_property ?? 'something';
var_dump( $my_var );

Output:
string(9) "something"

Performance Revisited

Here I am, almost 2 years later, realizing I have learned more about performance in the last couple of weeks than I had ever imagined. Up to this point in my career I have mainly focused on performance as it relates to the backend of web applications. As I start a new chapter in my life, I have come to realize frontend performance has come a long way from stitching files together and minifying everything.

For anyone interested in a deep dive, I recommend checking out web.dev. Although most people think of Google as a search engine, the developers there have done an outstanding job documenting how the various metrics are calculated, as well as the steps developers should take to ensure web applications score as well as possible in several key areas (Core Web Vitals). This is not to say they just sit there pointing their fingers saying “you need to do some arbitrary thing to ensure some academic computation looks good”; they show various techniques they have used to improve performance and metrics scores.

Core Web Vitals

Unless you have been developing websites in a cave somewhere, Core Web Vitals (CWV) is something you have probably seen at least once in the last couple of months. Why is that? CWV consists of the metrics Google has determined to be the most important when it comes to not only speed, but the perceived speed of a web application. The metrics included in CWV are Cumulative Layout Shift (CLS), First Input Delay (FID), and Largest Contentful Paint (LCP). The linked pages do a much better job explaining these metrics than I can, and I recommend you read the docs. In short: CLS is the amount your content shifts around during page load, FID measures the delay between a user’s first interaction with your site (e.g. clicking a link) and the browser responding to it, and LCP shows how long it took for the largest content block (text, image, video, etc.) to finish loading.

In addition to explaining the metrics, the documentation also provides demonstrations of what is measured and offers suggestions on how to fix common issues.

But that’s not all

In addition to the core metrics there are loads of others, such as Speed Index (SI), First Contentful Paint (FCP), Total Blocking Time (TBT), Time to Interactive (TTI), and Time to First Byte (TTFB).

TTFB gets a lot of attention because it tracks how long it takes from when a request is sent to when the first byte of a response is returned from the server. If you have a slow connection, a slow server, or if your application is doing a lot of work before sending information to the browser, the result is a high TTFB. No amount of processing power on the user’s end is going to make your site load faster if it takes your server 3 seconds to respond.
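If you want a quick, rough TTFB reading from the command line, curl can give you one. This measures a single request from your machine, so treat it as a ballpark number rather than what real users see:

$> curl -o /dev/null -s -w '%{time_starttransfer}\n' https://example.com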

As a backend developer, TTFB had been the only metric I ever really gave any thought, because I felt it was my job to ensure my application returned data as fast as possible. Only after diving into performance feet first (OK, I was pushed) did I realize it is actually a small piece of the puzzle. In addition to faster backend code, TTFB can usually be improved with caching and CDNs. That’s not all you can do for your scores overall, either: deferring non-essential scripts and CSS, as well as the old standby of minifying and stitching your JS/CSS files, can go a long way.

Now what?

Hopefully I have at least made you aware that there is more to web performance than smaller files and fewer round trips to the server. In addition to the web.dev site I mentioned, there are loads of resources available for those wishing to dive deeper (or find those “solutions” I mentioned). I recommend watching several (or all) of the talks from Google’s 2020 Dev Summit, and the Website Performance Optimization course on Udacity. The dev tools used in the course are a bit outdated and have been moved around in newer versions of Chrome, but the content is still really valuable.

Time permitting, I may add more about this topic as I dive deeper. Performance is a topic I have always been fascinated with, and now that I have seen how deep the rabbit hole goes, I realize just how little I knew about optimizations on the frontend.

A quick note on performance

Performance discussions have been ramping up over the last few years. So much of an individual’s life is spent online that companies are always trying to find ways to get things to the customer faster: Amazon.com has same day delivery in some areas, and many internet companies are working on (or already have) gigabit connections for customers.

As developers, we should be doing our part. Our code should be efficient, bug free, etc., and there are several tools and methodologies at our disposal to help ensure this.

Code reviews can help reduce bugs by having other developers look over your work. Misspell a variable name? Have the wrong scope? Double the load on the DB? Fellow developers can point these things out to you. Listen to their advice and don’t get upset: “Yes, it does double the database load; however, we account for that here with our additional caching layer, which is refreshed during non-peak times.” or “I didn’t know I should use let instead of var.” Value any feedback, good or bad, just don’t dwell on it.

Automated testing helps you avoid making the same mistake over again. No, not all code is testable; yes, it takes time to write tests. There are many arguments against adding automated testing to a mature project, mainly time and budget, but you don’t have to add tests to everything at once; you can do it one piece at a time. Just set up the test framework of your choice on your project and write a test for the next bug you fix. Guess what: if your fix gets reverted (maybe a merge went sideways), you’ll catch it. Then, for your next code change or addition (your new code is testable, right?), write a test or two. Eventually you have a test suite starting to take shape. The more tests you write, the more tests you want to write.

Caching is a hot topic with lots of opinions, methodologies, tools, frameworks, gurus, etc. I’m not going to argue for or against anything here; I’m just going to highlight the benefits and point out a downfall or two. If you are making repeated trips to the database for the same piece of data, cache it. Given a set of data, does your method always return the same result for that set? Given a second set of data, does it return the same result for that set? Cache the results. Once you have cached data, your code doesn’t have to perform complex operations every time it runs, and you don’t have to make repeated trips to the database for the same value. All of this helps make your application faster.

In addition to individual pieces of data, you can cache full page renderings and serve up a cached version of a page instead of having to build it every time. This is very useful for pages that don’t change very often, and for static assets such as CSS and JavaScript files.

Content Delivery Networks (CDNs) are servers that can serve up your images and other assets much faster than your webserver. Most CDNs replicate your content to servers in different geographical locations, and your users get the content from whichever server is closest to them, decreasing load times. Since the CDN takes care of serving your images, your webserver can focus on serving up your site.

AJAX is the term I’m going to use for the next part. I want to stay away from any semantic arguments and/or framework fanboy interjections and just say there is value in this approach. Often referred to as a service-based approach, your site can have static pages, be fully ADA compliant, fast, and heavily cached, and still be dynamic. By using AJAX to request the dynamic pieces from your website’s back end as it needs them and updating the displayed content as the responses arrive, your site can load “asynchronously,” meaning it loads multiple pieces at the same time instead of one piece having to finish before the next one starts.

Conclusion

Using cached static pages, loading images from a CDN, and loading dynamic content via fast API(s) that also use caching will speed your site up tremendously.

I purposefully left this vague, intending it more as a quick intro and food for thought than a how-to guide. I’m happy to cover other topics, so if you are interested, drop me a comment and I’ll see what I can do.

So, you want to build websites, huh?

Many years ago, I was introduced to HTML by my favorite high school teacher through a series of tutorials on HTMLGoodies.com titled “So, you want to learn HTML, huh?” by Joe Burns, Ph.D. These articles were written as primers, and the writing style really sparked my love of building websites. I started coding that day and haven’t looked back.

Thanks for the history lesson, but now what?

Where should you start? Well, that depends on your learning style and your goal. There are many places to learn on the internet: HTML Goodies is still around, but now we also have places like Codecademy with interactive, online lessons. In addition to the self-paced tutorials, you can find many instructional videos on YouTube, and if in-class learning is your thing, coder bootcamps are popping up all over the place. These abbreviated, no-fluff courses are a trial-by-fire introduction to the ways of web development.

To get started you don’t need anything fancy, just a text editor. I recommend something like Sublime Text or VS Code. Both can be used for free and are available on Windows and macOS. Do you need one of these? No, but you do want something with syntax highlighting. As long as you have something to use, you are good to go.

The next step is to get a good understanding of the basics: HTML, CSS, and JavaScript. You don’t need an in-depth understanding, but you should at least be familiar with each. Start with HTML, the basic markup used on all websites. From there, learn how to style your pages with CSS. Once you have that down, move on to JavaScript (JS) to add some interactivity.

Is there anything specific you would like to know about? Drop a comment to let me know.

Time to give back

I have been building websites in some form or another since 1997. During this time I have learned a lot from a lot of different sources including books, co-workers, and of course, the internet.

I have some free time due to a recent change in my life so I would like to use this time wisely and give back. I’d like to share my knowledge in the hopes of nurturing at least one new developer’s love of the craft.

I am mostly a back-end PHP developer and I love WordPress, so I will most likely focus on these areas. I am not a wordsmith, nor am I an English major, so I’d like to apologize up front for my writing style and any grammar mistakes you may find.

If there are any specific topics you’d like me to cover feel free to drop a comment on a post or message me through one of the contact links found in various places on my site.

Thanks for reading my blog; I hope you enjoy the content and that it’s a fun ride.