ipinfo, networking’s Swiss army knife

A few days ago I saw this post from Marco d’Itri on Mastodon. He is such an expert in networking matters that if he says a tool is useful, that is reason enough to try it out, even if networking is not your daily bread. So I tested ipinfo, and indeed it’s so great that I immediately added it to all my PCs!

Ipinfo is the official CLI for the IPinfo.io IP address API. It collects in a single tool some of the functionality offered by tools like whois or ipcalc, in addition to geolocation, subnetting, IP validation, and more.

Before I start presenting my favourite functionalities, please note that there is a limit of 1000 API calls per IP for non-authenticated requests. If you need more, you must use the authenticated API, which also offers a free plan.

Getting information about an IP

Similar to whois, but with a terse output and including geolocation information:

$ ipinfo 8.8.8.8
Core
- IP           8.8.8.8
- Anycast      true
- Hostname     dns.google
- City         Mountain View
- Region       California
- Country      United States (US)
- Currency     USD ($)
- Location     37.4056,-122.0775
- Organization AS15169 Google LLC
- Postal       94043
- Timezone     America/Los_Angeles

My IP

The command

ipinfo myip

is very useful when your Internet connection goes through a NAT device. ipinfo will find your public IP and show all the information related to that IP, just like when you query any other IP. If you just need to extract the IP (e.g. in a script), you can filter for just that:

ipinfo myip -f ip | tail -n 1
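
For example, to capture the address in a shell variable for later use:

MYIP=$(ipinfo myip -f ip | tail -n 1)
echo "My public IP is $MYIP"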

Grep IP

When you want to extract IP addresses (IPv4, IPv6, or both) from a text file, you can use:

ipinfo grepip filename    # shows lines containing IPs and highlights them
ipinfo grepip -o filename # prints only the IPs found in the file
ipinfo grepip -4 filename # as above, but only matches IPv4 addresses
ipinfo grepip -6 filename # as above, but only matches IPv6 addresses

You can also use ipinfo grepip to filter a stream of text, for example in a pipe with other commands:

wget -O - http://example.com/address-database.txt | ipinfo grepip -6
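
Or, assuming the options above can be combined (and with a hypothetical log file path, adjust to your case), you could count the distinct IPv4 addresses in a web server log like this:

ipinfo grepip -o -4 /var/log/nginx/access.log | sort -u | wc -l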

CIDR to range

When you have an IP range in CIDR notation and you want to know what interval of addresses is represented by that range you can use ipinfo cidr2range. It works with both IPv4 and IPv6 addresses:

$ ipinfo cidr2range fe80::4580:d144:cdee:ae16/64
fe80::-fe80::ffff:ffff:ffff:ffff
$ ipinfo cidr2range 192.168.100.0/22
192.168.100.0-192.168.103.255

Range to CIDR

When you have a range of IPs, and you want to know how it can be expressed in one or more CIDR ranges, you can use ipinfo range2cidr:

$ ipinfo range2cidr 192.168.0.0-192.168.0.191
192.168.0.0/25
192.168.0.128/26

Split CIDR

ipinfo splitcidr is useful when you have a large range and you want to split it into subranges. E.g. let’s say you have a VPC in AWS whose range is 10.10.0.0/16, and you want to break it down into /19 chunks:

$ ipinfo splitcidr 10.10.0.0/16 19
10.10.0.0/19
10.10.32.0/19
10.10.64.0/19
10.10.96.0/19
10.10.128.0/19
10.10.160.0/19
10.10.192.0/19
10.10.224.0/19

If you want these CIDR ranges expressed as IP ranges, you can just pipe ipinfo splitcidr into ipinfo cidr2range:

$ ipinfo splitcidr 10.10.0.0/16 19 | ipinfo cidr2range 
10.10.0.0-10.10.31.255
10.10.32.0-10.10.63.255
10.10.64.0-10.10.95.255
10.10.96.0-10.10.127.255
10.10.128.0-10.10.159.255
10.10.160.0-10.10.191.255
10.10.192.0-10.10.223.255
10.10.224.0-10.10.255.255

Tools

The ipinfo tool subcommand provides a number of very useful tools, among which I’ll mention:

  • is_ipv4 reports whether the given input is an IPv4 address
  • is_ipv6 reports whether the given input is an IPv6 address
  • is_valid reports whether the given input is a valid IP address
  • prefix is_valid reports whether the given input is a valid CIDR range
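
As a hedged sketch of how these could be used in a script: assuming the subcommand prints “true” for valid input (an assumption of mine, verify with your version of ipinfo), you could gate a query on it:

#!/bin/sh
# Validate the input before querying it. The "true" output that the
# check relies on is an assumption, not verified documentation.
addr="$1"
if ipinfo tool is_valid "$addr" | grep -q true; then
    ipinfo "$addr"
else
    echo "not a valid IP address: $addr" >&2
    exit 1
fi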

Installation

Ipinfo is available for many operating systems, in packages of different formats. Refer to the installation instructions on GitHub, or to the Releases page for manual installation from one of the provided packages.

Documentation

There is no “formal” documentation, but you can refer to the Quick Start section of the README on GitHub.

More DNS tooling


My LinkedIn post about the article Five DNS client tools, and how to use them received good feedback, with a lot of suggestions and alternatives to the tools presented. As that LinkedIn post ages, it may also become harder to find, so I thought I would summarise those contributions here. They are three DNS tools that can be easily installed on any container running in Kubernetes by unpacking a compressed archive and copying the program’s executable file to the container. In addition, “side-channel” techniques are also mentioned. Let’s see!

Installation

All three tools presented in the article provide a compressed archive. Assuming we are dealing with Linux, the general recipe is:

  1. unpack the archive in a local temporary directory
  2. find the executable file for the tool
  3. copy it to the container using kubectl cp
  4. get a shell in the container
  5. ensure that the file is executable in the container using chmod u+x filename
  6. ensure that the file is either in your PATH, or that you execute it from the directory you copied it in by means of the ./filename syntax

Since this part is common to all three tools, I won’t repeat it everywhere.
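
For reference, here is roughly what the recipe looks like in shell terms; the archive, executable, and pod names below are hypothetical, adjust them to your case:

mkdir /tmp/dnstool && tar -C /tmp/dnstool -xzf tool.tar.gz  # 1. unpack the archive
ls -R /tmp/dnstool                                          # 2. find the executable, say "tool"
kubectl cp /tmp/dnstool/tool mypod:/tmp/tool                # 3. copy it to the container
kubectl exec -it mypod -- sh                                # 4. get a shell in the container
# then, inside the container:
chmod u+x /tmp/tool                                         # 5. ensure the file is executable
/tmp/tool --help                                            # 6. run it with an explicit path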

dog

Suggested by Ed Daniel.

dog has a nice colored output which may make the result of a query easier to understand when there is a lot of data. Allegedly, the program is also able to detect when your terminal doesn’t support colors and offer a black-and-white alternative. It can also output JSON, which can make the information easier to process with other tools.

You can download dog from GitHub. The latest version is 0.1.0, released in November 2020.

Usage

Examples from GitHub; refer to the GitHub page for more information.

dog example.net                          # Query a domain using default settings
dog example.net MX                       # ...looking up MX records instead
dog example.net MX @1.1.1.1              # ...using a specific nameserver instead
dog example.net MX @1.1.1.1 -T           # ...using TCP rather than UDP
dog -q example.net -t MX -n 1.1.1.1 -T   # As above, but using explicit arguments

Installation

Use the provided ZIP package. Unpack it in a temporary directory on your computer. The dog binary is in the bin subdirectory. Then apply the recipe at the beginning of this article.

doggo

Suggested by Ed Daniel.

doggo is a tool inspired by dog, written in Go (hence doggo: dog + go). It shares some of dog’s nice features, and adds more of its own. The output is also a bit different. It has a rich command-line help, and the information and examples on the GitHub page are exhaustive.

doggo is in active development. You can download doggo from GitHub. The latest version is 0.5.7, released in September.

Usage

You are welcome to check the extensive usage examples on the GitHub page. I’ll just report the simplest ones as a reference:

$ doggo mrkaran.dev                                                                         
NAME            TYPE    CLASS   TTL     ADDRESS         NAMESERVER   
mrkaran.dev.    A       IN      20s     13.250.205.9    127.0.0.1:53
mrkaran.dev.    A       IN      20s     206.189.89.118  127.0.0.1:53
$ doggo -t MX -n 9.9.9.9 github.com
NAME            TYPE    CLASS   TTL     ADDRESS                         NAMESERVER 
github.com.     MX      IN      3600s   10 alt3.aspmx.l.google.com.     9.9.9.9:53
github.com.     MX      IN      3600s   5 alt1.aspmx.l.google.com.      9.9.9.9:53
github.com.     MX      IN      3600s   10 alt4.aspmx.l.google.com.     9.9.9.9:53
github.com.     MX      IN      3600s   5 alt2.aspmx.l.google.com.      9.9.9.9:53
github.com.     MX      IN      3600s   1 aspmx.l.google.com.           9.9.9.9:53

Installation

Download the Linux ZIP package and unpack it in a temporary directory: the doggo executable is right there. Apply the recipe at the beginning of this article.

q

No, it’s neither a typo nor a mistake: there is actually a DNS tool called just q!

Suggested by Chris Buijs.

Compared with the other two tools (and the comparison is made by the author of q himself), q is the most feature-rich of all three clients. If you are debugging a very exotic DNS problem, this could be your tool. If not, it can still be your Swiss army knife for DNS debugging. q supports all the nice things from dog and doggo, and then adds some more of its own: were you missing YAML output? Well, here you have it. Do you need to set some special flags on your DNS query? Here you can.

q is in active development. The latest version is 0.15.0 and was released just three hours before I started writing this article!

Usage

As before, I’ll report only the basic examples from the GitHub page which, by the way, doesn’t contain many of them. There is also a demo hosted on asciinema, but not much more.

q example.com                            # Lookup default records for a domain
q example.com MX SOA                     # ...or specify a list of types
q example.com MX @9.9.9.9                # Query a specific server
q example.com MX @https://dns.quad9.net  # ...over HTTPS (or TCP, TLS, QUIC, or ODoH)...
q @sdns://AgcAAAAAAAAAAAAHOS45LjkuOQA    # ...or from a DNS Stamp
q example.com MX --format=raw            # Output in raw (dig) format
q example.com MX --format=json           # ...or as JSON (or YAML)

Installation

There are deb and rpm packages that can be installed with the native tools of many different distributions. Or, you can download the tar.gz package and unpack it in a temporary directory: the q executable is right there. Then apply the recipe at the beginning of this article.

The side channel

What happens if you have no tools on the container and you cannot install one in any way? There is still hope. As suggested by Danny Sauer, you can use a “side channel” solution: commands that are not designed for the case you have at hand, but that will return the information you are interested in nonetheless:

often you’ll find that ping is installed, while the dnsutils aren’t. Even if a target doesn’t actually respond to ICMP echos, ping usually prints the IP to which the name resolves – and has the benefit of using gethostbyname probably the same way as the program which isn’t working. 🙂
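
In practice that looks like this; the banner line below is what Linux’s iputils ping prints, other implementations may format it differently:

$ ping -c 1 www.google.com | head -n 1
PING www.google.com (142.250.74.100) 56(84) bytes of data.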

As Danny suggests, this is true also for other cases:

Ping -r to replace the missing traceroute is also handy.

Danny also contributed a gist to the discussion, where he shows how you can work around missing commands on a very thin container. Basically, you use low-level interfaces to manually gather the information that dedicated commands would otherwise gather and pretty-print for you. It’s an incredibly interesting read, and you will do yourself a big favour if you take five more minutes to go through it! (This post is about to finish anyway.)

I wrote up a related gist a few years ago, speaking of this. It shows examples of doing some typical ps and lsof stuff with just ls, cat, and maybe tr on files in /proc in a pretty restrictive container. I need to fix the spelling and punctuation errors now that I reread it, but the actual code is accurate. 😅

https://gist.github.com/dannysauer/022dde4cea0eb9baa64fa99299ca40f2

Thanks

I’ll take the opportunity to thank Ed, Chris, and Danny for taking the time to share information about their favourite tools. I’d also like to thank Aleksey for his comment and all those who reacted to the LinkedIn post. This all was much appreciated!

Five DNS client tools, and how to use them

“Everything is a Freaking DNS problem”, as Kris Buytaert often puts it. Debugging any distributed system can be a daunting task in general, and DNS is no exception. But even debugging an internal DNS service, which won’t be nearly as distributed as the global domain name service, may turn out to be an unpleasant experience: think Kubernetes and CoreDNS, for example.

Debugging DNS-related problems in containers running in Kubernetes can be a challenge indeed, in that containers running in a cluster may be based on completely different Linux images, each one sporting a different DNS client, if any. In those cases, it’s better to have an idea of how to use whatever client you happen to find on those containers, or to install one yourself. Fear not, I have prepared an outline, just for you!

nslookup, the oldies but goldies

nslookup is perhaps the first generation of DNS query tools coming from the BIND DNS server project. It can be used in both interactive and non-interactive mode. In non-interactive mode you make a query directly on the command line, you get an answer, and the command exits:

$ nslookup www.google.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
Name:	www.google.com
Address: 142.250.74.100
Name:	www.google.com
Address: 2a00:1450:400f:80c::2004

nslookup uses the name servers that are configured on your system by default. You can use a different one by specifying it on the command line as the second argument:

$ nslookup www.google.com 8.8.8.8
Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
Name:	www.google.com
Address: 216.58.211.4
Name:	www.google.com
Address: 2a00:1450:400f:801::2004

If you run nslookup without arguments, you enter interactive mode, in which you can run several queries and also tweak how they are performed:

$ nslookup
> www.google.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
Name:	www.google.com
Address: 142.250.74.132
Name:	www.google.com
Address: 2a00:1450:400f:803::2004
> www.facebook.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
www.facebook.com	canonical name = star-mini.c10r.facebook.com.
Name:	star-mini.c10r.facebook.com
Address: 31.13.72.36
Name:	star-mini.c10r.facebook.com
Address: 2a03:2880:f10a:83:face:b00c:0:25de
> set querytype=mx
> gmail.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
gmail.com	mail exchanger = 10 alt1.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 20 alt2.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 40 alt4.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 30 alt3.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 5 gmail-smtp-in.l.google.com.

Authoritative answers can be found from:
> 

In the example above, we query the DNS server for the address of www.google.com and www.facebook.com. Then we switch the query type to MX (mail exchanger), and we check which servers handle email for the gmail.com domain.

This should be enough to get you going, see the nslookup man page for more info.

host, nslookup’s younger brother

host is the second generation of DNS query tools from the BIND project. Its basic usage is:

$ host www.google.com
www.google.com has address 216.58.207.228
www.google.com has IPv6 address 2a00:1450:400f:80c::2004

As with nslookup, you can specify the DNS server to resolve your query as the second argument of the command:

$ host www.google.com 8.8.8.8
Using domain server:
Name: 8.8.8.8
Address: 8.8.8.8#53
Aliases: 

www.google.com has address 142.250.74.100
www.google.com has IPv6 address 2a00:1450:400f:80b::2004

And you can query different types of records as well, e.g. MX:

$ host -t mx gmail.com
gmail.com mail is handled by 10 alt1.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 20 alt2.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 40 alt4.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 30 alt3.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 5 gmail-smtp-in.l.google.com.

host has no interactive mode, but that doesn’t mean you can’t tweak your queries. In fact, a number of command line options are there to help you. See the host man page for more info.

dig, the swiss army knife

dig is the third generation tool for DNS queries from the BIND project. It’s very powerful, in that it reports a lot of data about your queries and you can fine-tune it in all possible ways. At the same time, its default output format is very verbose, which makes it quite confusing at first.

Let’s query www.google.com once again, using dig:

$ dig www.google.com

; <<>> DiG 9.16.44-Debian <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9932
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.google.com.			IN	A

;; ANSWER SECTION:
www.google.com.		285	IN	A	142.250.74.100

;; Query time: 4 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Sun Sep 24 22:56:36 CEST 2023
;; MSG SIZE  rcvd: 59

Quite chatty, as you can see. But you can easily make it less chatty:

$ dig +short www.google.com
142.250.74.68

Here you see that only the IPv4 address was reported, but we know from previous examples that www.google.com also has IPv6 addresses, so why aren’t they displayed?

By default, dig resolves names to addresses by querying A records, and addresses to names by querying PTR records. DNS names are associated with their IPv6 addresses in AAAA records, and that’s what you need to query in order to resolve those. The two command lines below are equivalent:

$ dig +short -t AAAA www.google.com
2a00:1450:400f:804::2004
$ dig +short www.google.com AAAA
2a00:1450:400f:804::2004

If you want to use a DNS server other than the default, set it as the first argument of the command, prefixed by @:

$ dig +short @8.8.8.8 www.google.com AAAA
2a00:1450:400f:803::2004

This is just a brief introduction, but I can’t just leave you to the man page for dig: it’s so large and complete that it may feel as daunting as the DNS problems you are trying to debug. In that case, have a look at Julia Evans’ comics about dig and how to read dig output.

Finally, remember that you can use a .digrc file to set your default options instead of specifying them all the time on the command line like I just did (although you may not do that when debugging a problem inside a container). Check the man page for info.
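
For example, a one-line ~/.digrc like this makes every query terse by default, as if +short were always given on the command line:

$ cat ~/.digrc
+short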

getent, back to basics

getent is probably the oldest tool to offer name resolution capabilities. I don’t have any proof to support this claim; it’s actually the absence of any historical information from both the man page and the source code that makes me believe it has been around forever.

Whatever its birthdate, getent is also a different type of beast compared to the three tools we have seen so far. In fact, while nslookup, host, and dig are specialised in DNS only, getent is a more general tool that can be used to query several system databases, e.g. the password file:

$ getent passwd root
root:x:0:0:root:/root:/bin/bash

getent is also different in the way it does name resolution. In fact, getent leverages the C library directly, and resolves names according to the configuration in /etc/nsswitch.conf. Explaining the Name Service Switch functionality is definitely out of scope here; suffice it to say that, depending on how the functionality is configured, getent will return not only names resolved via DNS, but also names resolved through the hosts file or the .local names in your home network. You need to keep that in mind in case you are querying a name that is registered in both DNS and the hosts file, for example.
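
For reference, the hosts line in /etc/nsswitch.conf on a typical desktop Linux system looks something like the following; the exact list of sources varies between distributions, so treat this as an illustrative example only:

hosts: files mdns4_minimal [NOTFOUND=return] dns

Here files is the hosts file, mdns4_minimal handles the .local names, and dns is the actual DNS resolution, tried in that order.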

But enough talking! So, how does one resolve a name with getent?

$ getent hosts www.google.com
2a00:1450:400f:80c::2004 www.google.com

OK, this is only one address, and an IPv6 one at that. Any way around that? Of course!

$ getent ahosts www.google.com
2a00:1450:400f:80a::2004 STREAM www.google.com
2a00:1450:400f:80a::2004 DGRAM  
2a00:1450:400f:80a::2004 RAW    
216.58.207.228  STREAM 
216.58.207.228  DGRAM  
216.58.207.228  RAW

A bit verbose, but you have both IPv4 and IPv6. What if you only want one of the two?

$ getent ahostsv4 www.google.com
216.58.207.228  STREAM www.google.com
216.58.207.228  DGRAM  
216.58.207.228  RAW    
$ getent ahostsv6 www.google.com
2a00:1450:400f:80a::2004 STREAM www.google.com
2a00:1450:400f:80a::2004 DGRAM  
2a00:1450:400f:80a::2004 RAW

getent also allows for resolving more than one name with one single call:

$ getent hosts www.google.com www.facebook.com
2a00:1450:400f:80a::2004 www.google.com
2a03:2880:f10a:83:face:b00c:0:25de star-mini.c10r.facebook.com www.facebook.com

What if you want to query other DNS record types besides doing name resolution, or use a different name server than the one configured in the system? You can’t. getent is part of the C library tools and uses library calls to query information (e.g. gethostbyname or gethostbyaddr), and those calls don’t support the resolution of other record types.

getent is small and lightweight, so it may appear even in lightweight base images, unless their creators really went hard on the optimization. It’s worth knowing the basics, just in case it’s the only DNS query tool you have at hand. See the man page for more information.

resolve, the hidden perl

resolve is a Perl script that I wrote when I didn’t know about getent. The functionality is the same, in that it uses the C library under the hood to do name resolution, but I believe it provides a more consistent and complete output than getent. An example:

$ resolve www.google.com www.facebook.com
www.google.com ipv6 2a00:1450:400f:80c::2004
www.google.com ipv4 172.217.21.164
www.facebook.com alias star-mini.c10r.facebook.com
www.facebook.com ipv6 2a03:2880:f10a:83:face:b00c:0:25de
www.facebook.com ipv4 31.13.72.36

Just like getent hosts, resolve can resolve more than one name at a time. Unlike getent, it clearly marks IPv4 and IPv6 addresses, and it clearly reports aliases/CNAMEs, too.

You can find more details about resolve and why I wrote it in the article Name/address resolution from the perspective of the OS in this same blog. You’ll find the code, installation instructions, and a description of the differences between resolve and getent in the GitHub repository.

If you come across a container that is so stripped down that it has none of the other tools, but it has Perl, you may as well give resolve a try. On the other hand, I don’t expect you to come across such a case very often, so you may as well fall back to the last resort…

The last resort

If the container you are debugging in has no DNS tools and no Perl, your last resort is to install one of these tools yourself, if you know how to use that container’s distribution package management tools. If you don’t, then you need an article like this one, but for package managers. Shall we write one together? I volunteer for apt!
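
Speaking of apt: a minimal sketch for a Debian/Ubuntu-based container would be the one-liner below; dnsutils is the package that ships dig and nslookup there:

apt-get update && apt-get install -y dnsutils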

Welcome bookworm! And how to continue running apt-get update

Debian 12 “bookworm” was officially released two days ago, yay!

And, just like mine, your attempt this morning to update the apt package cache may have been met with an odd notification similar to this one:

E: Repository 'http://deb.debian.org/debian testing InRelease' changed its 'Codename' value from 'bookworm' to 'trixie'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

Why it’s happening

I have this source configured in apt:

deb-src http://deb.debian.org/debian/ testing main contrib non-free

The source refers to the distribution “testing”. The codename for testing is the same as that of the next Debian release. Before the release of Debian 12 it was “bookworm”. Now that bookworm is released, that codename switched to “trixie”. In my particular case this is more or less harmless, as this source is not going to trigger the installation of any package. But if I were using “testing” or “stable” in my apt sources, that would make a difference: I might unintentionally install packages from Debian 12 on my Debian 11 and make a mess of my system.

The error and the notification are there to warn you that there was a codename change, and that you should consider whether this is expected and you actually want to continue, or whether you’d rather lock your sources to the current codename instead (that would be “bullseye” in Debian 11’s case).

What to do

Lock your package sources to the correct codename. E.g. if you are running Debian 11 and you have “stable” in your apt sources for the official Debian repositories, replace “stable” with “bullseye”. Note that this may be different for third-party repos; check with the vendor for instructions.
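
For example, for official Debian repositories in /etc/apt/sources.list, that means a change like this:

# before
deb http://deb.debian.org/debian/ stable main contrib non-free
# after
deb http://deb.debian.org/debian/ bullseye main contrib non-free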

If, as in my case, the change is harmless, you need to let apt know that you approve it. That’s what we’ll see below in detail.

Accepting the codename change

The notification points to apt-secure. If you are like me, the next command you ran was man apt-secure. That helped me find out more about why this was happening, but not with the solution, alas:

INFORMATION CHANGES
       A Release file contains beside the checksums for the files in
       the repository also general information about the repository
       like the origin, codename or version number of the release.

       This information is shown in various places so a repository
       owner should always ensure correctness. Further more user
       configuration like apt_preferences(5) can depend and make use
       of this information. Since version 1.5 the user must therefore
       explicitly confirm changes to signal that the user is
       sufficiently prepared e.g. for the new major release of the
       distribution shipped in the repository (as e.g. indicated by
       the codename).

This is nice. Except that it doesn’t mention how one is supposed to explicitly confirm changes.

Some more digging and the man page of apt-get provided the solution:

       --allow-releaseinfo-change
           Allow the update command to continue downloading data from
           a repository which changed its information of the release
           contained in the repository indicating e.g a new major
           release. APT will fail at the update command for such
           repositories until the change is confirmed to ensure the
           user is prepared for the change. See also apt-secure(8) for
           details on the concept and configuration.

           Specialist options (--allow-releaseinfo-change-field) exist
           to allow changes only for certain fields like origin,
           label, codename, suite, version and defaultpin. See also
           apt_preferences(5). Configuration Item:
           Acquire::AllowReleaseInfoChange.

Running apt-get update --allow-releaseinfo-change returned the notification part again (the message prefixed with “N:”) but not the error (“E:”). Subsequent runs of apt/apt-get ran as usual. Problem solved 🙂

apt-key is deprecated, part 2

In my first article about the deprecation of apt-key I illustrated a few ways of adding APT repository keys to your system without using the apt-key command. A good follow-up discussion to that article started on Twitter (thanks to Petru Ratiu). The topics we discussed were: the use of the signed-by clause and whether it really helps increase security; the use of package pinning to avoid third-party packages taking over official packages; and the pollution of system directories.

In this post we dig a bit deeper into these topics and how they help, or don’t help, make your system more secure. A TL;DR for the impatient is included at the end of each section.

Continue reading

apt-key is deprecated, now what?

It’s only a few weeks since I upgraded one of my systems from Debian 10 to Debian 11. In fact, I usually apply a “Debian distribution quarantine”: when a new major version of the distribution is out, I wait until a “.1” or “.2” minor version before installing it, as I don’t have enough time to debug problems that may have escaped Debian’s QA process at the very first release.

One of the first things that caught my attention when I ran the apt-key command in Debian 11 (e.g. a simple apt-key list) was a warning:

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8))

“Deprecated” usually means that a certain functionality will eventually be removed from the system. In this case, Ubuntu users will be hit as early as 2022 with the release of 22.10 in October, as the command will be available for the last time in the next LTS (22.04) to be released in April. Debian users will have more time, as the command won’t be available in the next major release of Debian (supposedly Debian 12, which may be a couple of years away). This is written in clear letters in the man page:

apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.

So, what are you supposed to do now in order to manage the keys of third party APT repositories?

Continue reading

Bash scripting: using ‘read’ without a loop

Another post in the “note to myself” style.

For anyone who does bash scripting, the command read is a well-known tool. A usual task we use read for is to process the output of another command in a while loop, line by line, picking up a few fields and doing something with them. A stupid example:

sysctl -a 2> /dev/null | grep = | while read PARM DUMMY VALUE
do
  echo "Value for $PARM is $VALUE"
done

That is: we read the output of sysctl, line by line, selecting only the lines that contain a = sign, then read the name of the setting and its value into PARM and VALUE respectively, and do something with those values. So far so good.

Based on what we have just seen, it’s easy to expect that this:

echo foobar 42 | read PARM VALUE
echo "Value for $PARM is $VALUE"

would print “Value for foobar is 42”. But it doesn’t:

Value for  is 

So, where did those values go? Did read work at all? In hindsight I can tell you: yes, it worked, but those values disappeared as soon as read was done with them. That’s because each command in a pipeline runs in its own subshell: read did set PARM and VALUE, but in a subshell that exited right away, taking its variables with it. To both parse the values and use them, you have to run read and the commands using the variables in the same subshell. This works:

echo foobar 42 | ( read PARM VALUE ; echo "Value for $PARM is $VALUE" )

Or even

echo foobar 42 | (
    read PARM VALUE
    echo "Value for $PARM is $VALUE"
)

This will print “Value for foobar is 42”, as expected.
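
If you are using bash specifically (not a plain POSIX sh), a here-string is another way to get the same result, since it feeds read without putting it in a pipeline, and thus without a subshell:

read PARM VALUE <<< "foobar 42"
echo "Value for $PARM is $VALUE"   # prints: Value for foobar is 42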

Terraform modules release

I am publishing four Terraform modules today. This code has lived in a private repository of mine for two years, and I decided that it was time to dig it out, put it in dedicated public repositories, and release it. Two of them were my first experiments in making Terraform modules and are probably too simple for general use. The other two are related to CFEngine: one helps make CFEngine test clients, the other CFEngine servers.

As always, this code is released with a GPL license in the hope that it will be useful to more people than just myself.

Continue reading

Reading one-line lists with the Bash shell

Commands like the AWS CLI may return a list of values all in one line, where each item is separated from the nearby items by spaces. Using a plain read command doesn’t really work: read will read all the values in one go into the variable. You need to change the delimiter that read uses to split the input. No need to pipe the output through Perl or other tools: read has got you covered with the -d option.
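
A minimal demonstration of the idea, independent of the AWS CLI (note the trailing space in the echoed string: read -d ' ' needs a delimiter after the last item too, otherwise the final read fails before the loop body runs):

echo "alpha beta gamma " | while read -d ' ' ITEM ; do
  echo "item: $ITEM"
done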

In the example below I get the list of the ARNs of all target groups in an AWS account, and then iterate over those ARNs to list all the instances in each target group. The output is also saved into a file through the tee command:

aws elbv2 describe-target-groups \
  --query 'TargetGroups[].TargetGroupArn' \
  --output text | \
  while read -d ' ' ARN ; do \
    echo -n "$ARN: " ; \
    aws elbv2 describe-target-health \
      --target-group-arn "$ARN" \
      --query 'TargetHealthDescriptions[].Target.Id' \
      --output text ; sleep 1 ; \
  done | \
  tee tg-instances.txt

The output of this one-liner will be in the format:

ARN: instance_ID [instance_ID...]

Things to notice:

  • the AWS CLI’s describe-target-groups command will list all target groups’ ARNs thanks to the --query option, putting as many of them as possible on single lines, according to the shell’s output buffer capacity; the output is piped into a while loop;
  • the while loop uses read -d ' ' to split each line at spaces and save each item in the $ARN variable, one per cycle;
  • the echo command prints the value of $ARN followed by a colon and a space, but does not output a newline sequence due to the -n option;
  • the AWS CLI’s describe-target-health command will list all target IDs thanks to the --query option and print them out on a single line; it will also provide a newline sequence, so that the next iteration of the loop will start on a new line;
  • the sleep 1 command slows down the loop, so that we don’t hammer the API to the point that they will rate-limit us;
  • finally, the tee command will duplicate the output of the while loop to both the standard output and the file tg-instances.txt.

Experimenting with Python

The SRE team at RiksTV, 2020-2021

In our team at RiksTV, the company I joined in March 2020, we use Python. I had never used Python before and I’m working as hard as I can to fill the gap.

During the Christmas break I assigned myself a small coding challenge, both to test what I have learned so far and to keep that new knowledge from being washed away. I decided to share that code, and I will continue sharing as I keep learning, whenever I make something that could be useful to more people than just myself. Head to GitHub if you are interested.

Enjoy!