
[{"content":"","date":"April 8, 2026","externalUrl":null,"permalink":"/tags/aws/","section":"Tags","summary":"","title":"Aws","type":"tags"},{"content":"I\u0026rsquo;ve hit this problem twice now. At MetaCPAN, we were looking at using S3 as a sync target for rsync from upstream CPAN — conceptually simple, except rsync wants a filesystem and S3 very much isn\u0026rsquo;t one. More recently, I wanted to mount an S3 bucket as an image cache for Buildah. Same wall. You end up writing glue code, or reaching for a FUSE driver that may or may not be production-ready, or just redesigning around the limitation.\nAWS just launched S3 Files, which lets you mount an S3 bucket as an NFS filesystem on EC2, Lambda, EKS, and ECS. This is not \u0026ldquo;we bolted NFS onto S3\u0026rdquo; — the framing matters here. S3 stays the authoritative data source; the service puts a proper filesystem layer in front of it.\nHow the caching works\nSmall files (under 128 KB by default) get pulled onto high-performance EFS-backed hot storage on first access. Larger files stream directly from S3 without going through the hot tier. Untouched files auto-evict after a configurable window — 1 to 365 days, defaulting to 30.\nIt\u0026rsquo;s a read-through cache with NFS semantics. For workloads that need filesystem access to S3 data, that\u0026rsquo;s a genuinely clean model.\nPricing needs a close look\nHot storage runs $0.30/GB-month (USD), reads are $0.03/GB, writes $0.06/GB, with a 32 KB minimum per operation. Cold data in S3 stays at standard S3 rates (~$0.023/GB-month). The small-file minimum is the one to watch — if you have a lot of tiny files, the per-operation floor adds up fast.\nFor ML pipelines and agentic AI workloads with large files and sequential access patterns, the math probably works out cleanly. 
For something like CPAN, where the archive is millions of small distribution tarballs, I\u0026rsquo;d want to model it carefully before committing.\nThe \u0026ldquo;S3 is not a filesystem\u0026rdquo; thing\nCorey Quinn over at Last Week in AWS pointed out that he\u0026rsquo;s been saying \u0026ldquo;S3 is not a filesystem\u0026rdquo; for a decade, and this announcement complicates that. His read — which I agree with — is that the right call here was building a real filesystem layer on top of S3 rather than trying to make S3 itself behave like one. The design preserved what S3 is good at while giving you the interface you sometimes need.\nI wish this had existed two years ago. Both the MetaCPAN and Buildah cases would have been much more straightforward.\nvia AWS Blog and Last Week in AWS\n","date":"April 8, 2026","externalUrl":null,"permalink":"/til/s3-files-s3-as-a-filesystem/","section":"TIL","summary":"I’ve hit this problem twice now. At MetaCPAN, we were looking at using S3 as a sync target for rsync from upstream CPAN — conceptually simple, except rsync wants a filesystem and S3 very much isn’t one. More recently, I wanted to mount an S3 bucket as an image cache for Buildah. Same wall. 
You end up writing glue code, or reaching for a FUSE driver that may or may not be production-ready, or just redesigning around the limitation.\n","title":"AWS S3 Files: S3 Buckets as NFS Filesystems","type":"til"},{"content":"","date":"April 8, 2026","externalUrl":null,"permalink":"/tags/filesystem/","section":"Tags","summary":"","title":"Filesystem","type":"tags"},{"content":"","date":"April 8, 2026","externalUrl":null,"permalink":"/tags/nfs/","section":"Tags","summary":"","title":"Nfs","type":"tags"},{"content":"","date":"April 8, 2026","externalUrl":null,"permalink":"/tags/s3/","section":"Tags","summary":"","title":"S3","type":"tags"},{"content":"","date":"April 8, 2026","externalUrl":null,"permalink":"/","section":"Shawn Sorichetti","summary":"","title":"Shawn Sorichetti","type":"page"},{"content":"","date":"April 8, 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"April 5, 2026","externalUrl":null,"permalink":"/tags/cli/","section":"Tags","summary":"","title":"Cli","type":"tags"},{"content":"I use Granted for per-terminal AWS credential assumptions — it\u0026rsquo;s great for switching between the multiple work accounts I juggle throughout the day. But I have SSO configured across more than one organization, and every morning I was logging into each one manually, one at a time, like a chump.\nTurns out aws sso login has a --sso-session flag that targets a named session block from ~/.aws/config. So logging into multiple orgs is just two commands:\naws sso login --sso-session org-a aws sso login --sso-session org-b Each command opens a browser tab, you approve it, done. 
Both sessions are authenticated and ready for Granted to assume roles from — whichever account you need next.\nThe named sessions come from [sso-session \u0026lt;name\u0026gt;] blocks in your config:\n[sso-session org-a] sso_start_url = https://org-a.awsapps.com/start sso_region = us-east-1 [sso-session org-b] sso_start_url = https://org-b.awsapps.com/start sso_region = us-west-2 Individual profiles then reference these with sso_session = org-a (or org-b), which is how Granted knows which upstream session to use when you switch profiles mid-day.\nThis doesn\u0026rsquo;t replace Granted at all — Granted still handles the per-tab assume step. This is just the upstream SSO login step, and knowing you can target named sessions directly makes the multi-org case a lot less tedious.\nI\u0026rsquo;d been running aws sso login without the flag, which defaults to\u0026hellip; something, and it wasn\u0026rsquo;t always obvious which org it was logging me into. The explicit --sso-session flag is strictly better.\nvia Perrotta.dev\n","date":"April 5, 2026","externalUrl":null,"permalink":"/til/aws-cli-multiple-sso-sessions/","section":"TIL","summary":"I use Granted for per-terminal AWS credential assumptions — it’s great for switching between the multiple work accounts I juggle throughout the day. But I have SSO configured across more than one organization, and every morning I was logging into each one manually, one at a time, like a chump.\nTurns out aws sso login has a --sso-session flag that targets a named session block from ~/.aws/config. 
So logging into multiple orgs is just two commands:\n","title":"Logging Into Multiple AWS SSO Sessions at Once","type":"til"},{"content":"","date":"April 5, 2026","externalUrl":null,"permalink":"/tags/productivity/","section":"Tags","summary":"","title":"Productivity","type":"tags"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/tags/ai/","section":"Tags","summary":"","title":"Ai","type":"tags"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/categories/conferences/","section":"Categories","summary":"","title":"Conferences","type":"categories"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/tags/flox/","section":"Tags","summary":"","title":"Flox","type":"tags"},{"content":"The schedule I built two weeks ago was a fiction. A useful fiction — it forced real thinking about tradeoffs — but eighteen of the sessions I marked as \u0026ldquo;MUST\u0026rdquo; or \u0026ldquo;HIGH\u0026rdquo; are now links in a YouTube folder I won\u0026rsquo;t open before 2027. The one session that wasn\u0026rsquo;t on any schedule, wasn\u0026rsquo;t announced publicly, and had no recording? That one I can still reconstruct line by line.\nThat\u0026rsquo;s the gap between the conference you plan and the conference you actually attend.\nThursday — PlanetNix and the unexpected room # Kelsey Hightower\u0026rsquo;s \u0026ldquo;Is it time for Nix?\u0026rdquo; talk opened PlanetNix with the kind of skepticism I wasn\u0026rsquo;t expecting from someone whose name is on the event. He traced his own career — sysadmin to DevOps to SRE to platform engineer — and made the point that pivoting is the job, not an anomaly. 
But then he said something about AI that I wrote down immediately: \u0026ldquo;With this new technology we are going faster, but what are we doing with this new time? Are we spending more time with family? No. Are we getting raises? No. That\u0026rsquo;s the skepticism. The people who are driving it aren\u0026rsquo;t driving it for altruistic reasons.\u0026rdquo;\nThat\u0026rsquo;s not a blanket condemnation of AI. It\u0026rsquo;s a harder question: who captures the productivity gains? It set a tone for the whole conference that I kept returning to.\nSam Fu from Anthropic presented after Kelsey, and the talk was one of those sessions where a single data point rewires your sense of what\u0026rsquo;s normal. Anthropic gives each developer their own pod on its own dedicated node. Not a shared namespace. Not resource quotas. Their own node. I reacted to this the way you\u0026rsquo;d expect — that\u0026rsquo;s absurd, that\u0026rsquo;s expensive — and then spent the rest of the talk recalibrating. Their rationale is that CI should match developer environments exactly, and you can\u0026rsquo;t do that with shared nodes. They use Tailscale as a sidecar inside the dev container, and they\u0026rsquo;ve consolidated their service containers using rootless Docker to avoid the operational overhead of running Docker-in-Docker properly. The philosophical takeaway — things should just work for our users; go to great lengths to ensure it — is easy to agree with in a talk and genuinely hard to act on in an existing platform.\nThen the afternoon ended with the thing I couldn\u0026rsquo;t have put on a schedule: a private roundtable with Kelsey, the Flox CTO and their VP of Engineering, a representative from JPL, and Jesse from my team.\nThe conversation centred on something Kelsey had been thinking through out loud: Kubernetes started with Docker images as the atomic unit. Layers. Composition through FROM.
Flox breaks that model — instead of building images that contain everything, you inject Nix packages directly into containers at runtime. The build artifact is no longer an image. It\u0026rsquo;s a Nix package. The CI pipeline ends with Flox assembling and packaging the application.\nWhat this means practically: you stop rebuilding your container because jq got a patch. Common tooling lives in the Nix layer, maintained separately, not triggering your application builds. For Spark specifically, this is significant — the entire application doesn\u0026rsquo;t have to travel through every layer of your image pipeline.\nOn the walk back afterward, Jesse and I were already mapping this to our own build process. The output would still be an image pushed to ECR, which means it\u0026rsquo;s a drop-in from the perspective of every downstream system. But the shape of how you get there changes substantially. We don\u0026rsquo;t have a plan yet, but we have a direction.\nThe JPL engineer in the room was a useful reminder that reproducibility isn\u0026rsquo;t just a developer experience problem. When you\u0026rsquo;re building software for spacecraft, \u0026ldquo;it worked on my machine\u0026rdquo; isn\u0026rsquo;t a philosophy you can entertain.\nFriday — The morning was excellent; I skipped the afternoon # John Willis opened Friday and gave me the most quotable framework of the conference. The standard risk model — Risk = Impact x Likelihood — is no longer sufficient for AI-accelerated threat environments. Willis\u0026rsquo;s revision: Risk = (likelihood ^ velocity) x Authority.\nThe velocity term is doing real work there. An AI agent can go from an announced CVE to active exploitation in under 15 minutes. Humans cannot review, assess, and respond in that window. Willis\u0026rsquo;s conclusion: \u0026ldquo;The human can no longer catch the error before it happens. 
The system architecture needs to protect against it.\u0026rdquo; If you\u0026rsquo;re still designing security controls around human review cycles, you\u0026rsquo;re building for a threat model that\u0026rsquo;s already obsolete.\nKat Morgan was next with her platform stack live demo — devcontainers, Nix, Docker-in-Docker, K8s-in-containers, KubeVirt, Ceph, Cilium, Dagger, Gitea — which turned out to be genuinely substantive rather than a scope disaster. The key structural idea was the workspace path format: /workspace/{user}/{server}/{namespace}. User can be a human, an AI agent, or a CI runner. Each gets its own subtree with independent group ownership. The consequence of that structure is that you get isolation across all three categories without special-casing any of them.\nThe line I keep coming back to: \u0026ldquo;To make things reliable for AI we have to start making them reliable for humans first.\u0026rdquo; This is the thing I want to put on a slide in our next planning cycle. A lot of the AI workload pressure we\u0026rsquo;re under is pressure to build new infrastructure. Morgan\u0026rsquo;s argument is that you mostly need to finish the infrastructure you already started.\nMaya Singh\u0026rsquo;s session on conversational K8s debugging used Inspektor Gadget with an MCP integration in Cursor. The demo worked the way demos rarely do — she traced a DNS issue live, identified it as a five-dot FQDN lookup hitting CoreDNS unnecessarily, and resolved it. What made it interesting wasn\u0026rsquo;t the outcome, it was the framing: IG has many powerful diagnostic tools, and under pressure, teams consistently pick the wrong ones. The LLM selects the right tool based on the problem description. Engineers who know IG well were apparently worse at tool selection during incidents than the LLM with no prior IG experience. 
The expertise that makes you fast in normal conditions is the same thing that gives you tunnel vision under pressure.\nThen I went to the vendor hall, had lunch with the AWS team, and skipped the rest of the afternoon.\nI should be honest about this. In the scheduling post I spent several paragraphs explaining why Dustin Kirkland\u0026rsquo;s agentic pipeline supply chain talk was a must-attend. I marked it MUST. And I didn\u0026rsquo;t go. I had a good lunch conversation and the afternoon slipped away. That\u0026rsquo;s what actually happened. It\u0026rsquo;s in the recording folder now.\nSaturday — A direct report\u0026rsquo;s first conference talk, and the room I\u0026rsquo;ll be in next year # Renovate at 1,300 repos turned out to be an interesting topic, boring delivery. The useful technical details: Grafana runs multiple Renovate configs as CronJobs with shared Redis state, splits jobs alphabetically to avoid deduplication overhead, and uses a webhook-triggered Go application to manage scans against PRs. Renovate PRs include changelogs and CVE descriptions inline, which means the person reviewing the PR has the context to make an informed decision without leaving the GitHub UI. There\u0026rsquo;s also a paging setup that alerts when Renovate isn\u0026rsquo;t opening enough PRs — a neat inversion of the usual \u0026ldquo;too many alerts\u0026rdquo; problem.\nThe sidebar: I didn\u0026rsquo;t know you could manually trigger a job from k9s. Learned that in the middle of what was otherwise a slow session. Sometimes conferences work like that.\nAt 12:30 I stayed in the same room for Vinh Nguyen\u0026rsquo;s talk on migrating from Logz.io to self-managed Grafana Loki. This is the part I don\u0026rsquo;t have detailed technical notes on, and that\u0026rsquo;s intentional. I was there as a manager, not as a content consumer.\nVinh is on my team. This was his first conference talk. 
My notes from that session are four words: \u0026ldquo;Doing a good job, integrated the feedback we provide.\u0026rdquo; That\u0026rsquo;s it. There\u0026rsquo;s nothing else I need to write down. Being in that room was the whole point.\nAt 2:30, the Meta containers-in-containers talk became the technical surprise of the conference. The setup: Meta runs production multi-tenant compute using nested containers. Developers SSH into a login container, which runs Podman so they can start Claude. Then — and this is the part that answered a question my team hadn\u0026rsquo;t quite articulated yet — an iptables rule restricts that inner container to only communicate with a proxy that limits access to the inference server. Not the open internet. Just the inference endpoint.\nThe pattern: nested containers as the agent sandboxing primitive, with iptables as the enforcement layer. The developers can run their own builds inside their containers too — a bind mount of /proc to /proc resolves the RUN statement failures that blocked them initially. They moved from Kaniko (now archived) to BuildKit for the image building layer.\nThis is the architectural answer to the agent isolation question we keep circling. We knew we needed sandboxing for agent workloads. We\u0026rsquo;d been thinking about it as a separate problem from developer environments. Meta has collapsed those into the same pattern.\nThe Saturday panel — Kelsey, Stormy Peters, and James Bayer on AI reshaping infra — ran over time and nobody in the room seemed to mind.\nThree things earned space in my notes.\nFirst, Kelsey on training data collapse: if people stop contributing to Stack Overflow, stop writing blog posts, stop creating the public corpus that models train on, what do the next generation of models train on? 
\u0026ldquo;Models can\u0026rsquo;t train on their own output.\u0026rdquo; This is a systems problem with a feedback loop that most of the AI discourse ignores.\nSecond, the APIs-as-hints argument: \u0026ldquo;We write our APIs with hints, not instructions. This was never good.\u0026rdquo; Kelsey\u0026rsquo;s point was that AI works better with intent-based interfaces, and that those should have been the standard from the start. We gave up on clear specification in favour of \u0026ldquo;good enough for humans to figure out.\u0026rdquo; Now we\u0026rsquo;re paying for it.\nThird — and this is the one I kept thinking about on the walk back to the hotel — Kelsey on consent: AI is \u0026ldquo;built on our prior knowledge without our consent and sold back to us for $20/month.\u0026rdquo; This isn\u0026rsquo;t a legal argument or a licensing argument. It\u0026rsquo;s a community argument. The open source ecosystem that produced the training data operated under norms that didn\u0026rsquo;t anticipate that use. Whether you think the models are technically in compliance or not, the social contract was violated.\nAfter the panel, I had a two-minute hallway conversation with James Bayer about our Flox adoption plan. His advice was immediate and specific: forget about the Kubernetes integration first. Start with developer workflows. Two sentences. Immediately actionable. Worth more than most of the formal sessions.\nSunday — Shorter than planned, and that was fine # Mark Russinovich\u0026rsquo;s supply chain keynote was a welcome surprise in how direct it was. Microsoft is shipping Sysinternals for Linux (including jcd). KEDA graduated to CNCF after being incubated inside Microsoft. And Russinovich said something that should be in every security review presentation: \u0026ldquo;Not looking at the code is not the flex you think it is.\u0026rdquo;\nThe practical takeaway from this talk was the OpenSSF Scorecard tool — a CLI that scores repositories for supply chain trustworthiness. 
The Open Source Security Foundation has 117 members across 16 industries. Running Scorecard on your dependencies is a 20-minute task that gives you a defensible starting position on supply chain posture. We\u0026rsquo;re going to add it to our onboarding checklist for new dependencies.\nEngin Diri from Pulumi had the central tension of Sunday\u0026rsquo;s track in his title: AI platforms without losing engineering principles. The architecture he described uses KServe for model serving, LiteLLM for access management, and agent-sandbox from CNCF for isolation — running on Bottlerocket nodes with skills defined as ConfigMaps. The demo used Open WebUI with specific sandbox skills to handle developer infrastructure queries without requiring any infrastructure setup from the developer. His project code is at dirien/what-is-ai-platform-engineering-and-why-should-you-care if you want to follow along.\nThe honest note: Engin has leaned heavily into LiteLLM, and I\u0026rsquo;ve heard that its support and maintenance have been declining. Worth watching before committing to it as a dependency. The architecture makes sense; the specific tool choice is a conversation I need to have with my team before we adopt it.\nThe Chainguard booth conversation filled in a gap from the week. They\u0026rsquo;re running Trivy and Grype for image scanning — layered, not redundant, with each catching things the other misses. More usefully, someone there had done the work of integrating GPU utilization metrics into Karpenter dashboards. We\u0026rsquo;ve wanted this since we started running AI workloads and kept deprioritizing it. I came away with a concrete approach to take back to the team instead of just another item on the list.\nWhat actually changed because of this conference comes down to three things — two technical, one personal.\nThe container-as-artifact shift is a convergent signal. 
The Flox roundtable, Kat Morgan\u0026rsquo;s path structure for CI/AI/human parity, and Meta\u0026rsquo;s containers-in-containers work all point at the same thing: the container image as the fundamental build artifact is being renegotiated. Not replaced — nothing at this scale replaces things, it accumulates layers — but the assumptions underneath it are shifting.\nThe AI skepticism is coming from the people who know the most. Kelsey said it twice (the productivity gains question on Thursday, the consent framing on Saturday). Willis gave it a precise technical form (velocity changes the risk calculus, systems need to absorb the consequences). Morgan said it plainest: make things reliable for humans first. These are not people who are anti-AI. They\u0026rsquo;re people who\u0026rsquo;ve thought harder about it than most, and they\u0026rsquo;re all expressing the same category of doubt.\nAnd Vinh. The conference had a lot of good content. The manager moment I\u0026rsquo;ll actually remember was staying in that room at 12:30.\nI wrote in the scheduling post that I have a folder of recordings I haven\u0026rsquo;t opened since 2023. That folder has eighteen new items in it now. I\u0026rsquo;m not going to watch them. The Flox roundtable — unrecorded, unscheduled, forty-five minutes in a hallway meeting room — was worth more than any of them would have been, because it ended with a direction and a concrete next step, not just a presentation I could have read on someone\u0026rsquo;s blog at home.\n","date":"March 23, 2026","externalUrl":null,"permalink":"/posts/2026/03/four-days-eighteen-missed-sessions-and-a-private-roundtable-with-kelsey-hightower-scale-23x-as-it-actually-happened/","section":"Posts","summary":"The schedule I built two weeks ago was a fiction. A useful fiction — it forced real thinking about tradeoffs — but eighteen of the sessions I marked as “MUST” or “HIGH” are now links in a YouTube folder I won’t open before 2027. 
The one session that wasn’t on any schedule, wasn’t announced publicly, and had no recording? That one I can still reconstruct line by line.\nThat’s the gap between the conference you plan and the conference you actually attend.\n","title":"Four days, eighteen missed sessions, and a private roundtable with Kelsey Hightower: SCALE 23x as it actually happened","type":"posts"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/tags/kubernetes/","section":"Tags","summary":"","title":"Kubernetes","type":"tags"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/tags/nix/","section":"Tags","summary":"","title":"Nix","type":"tags"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/categories/platform-engineering/","section":"Categories","summary":"","title":"Platform-Engineering","type":"categories"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/tags/scale/","section":"Tags","summary":"","title":"SCALE","type":"tags"},{"content":"","date":"March 23, 2026","externalUrl":null,"permalink":"/tags/supply-chain/","section":"Tags","summary":"","title":"Supply-Chain","type":"tags"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/adguard/","section":"Tags","summary":"","title":"Adguard","type":"tags"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/dns/","section":"Tags","summary":"","title":"Dns","type":"tags"},{"content":"On a recent trip I kept getting connection failures that needed retrying — pages half-loading, API calls timing out, the usual DNS-smells-wrong experience. It was intermittent enough to be annoying but consistent enough that I knew something was actually broken.\nI narrowed it down to DNS pretty quickly. 
My GL.iNet MT-3000 travel router was dropping queries or returning nothing for some domains.\nThe culprit turned out to be obvious in retrospect: before leaving I had shut down my Pi-hole servers at home. Those Pi-holes live on my Tailscale network, and my travel router connects back to that network. Somewhere, something was still trying to use them for DNS.\nI tried spinning one of the Pi-holes back up — no change. So last night I actually dug into the GL.iNet admin panel.\nHere\u0026rsquo;s the thing I didn\u0026rsquo;t know: GL.iNet routers run AdGuard Home as the built-in DNS layer, and AdGuard Home has its own upstream DNS configuration that\u0026rsquo;s completely separate from the router\u0026rsquo;s basic DNS settings. You get to it via the AdGuard Home interface, under Settings \u0026gt; DNS settings \u0026gt; Upstream DNS servers.\nBuried in there were two IP addresses: my Pi-hole boxes, hardcoded by their Tailscale IPs — a setting I\u0026rsquo;d configured and forgotten about entirely.\nThe fix was replacing those with something that doesn\u0026rsquo;t depend on my home network being up — 1.1.1.1 (Cloudflare) and 9.9.9.9 (Quad9). I have new Pi-holes now, but they\u0026rsquo;re not on Tailscale yet, so public DNS it is for now.\nThe tricky part was that the GL.iNet web UI has a DNS section in the main router config. I\u0026rsquo;d already checked that — it looked fine. The AdGuard Home upstream config is a completely separate place, only reachable by clicking through to the AdGuard Home admin interface itself. 
Easy to miss if you don\u0026rsquo;t know it exists.\nIf you use a GL.iNet router with AdGuard Home enabled and your DNS depends on anything on your home network, double-check those upstream DNS settings before you travel.\n","date":"March 3, 2026","externalUrl":null,"permalink":"/til/glinet-adguard-upstream-dns/","section":"TIL","summary":"On a recent trip I kept getting connection failures that needed retrying — pages half-loading, API calls timing out, the usual DNS-smells-wrong experience. It was intermittent enough to be annoying but consistent enough that I knew something was actually broken.\nI narrowed it down to DNS pretty quickly. My GL.iNet MT-3000 travel router was dropping queries or returning nothing for some domains.\nThe culprit turned out to be obvious in retrospect: before leaving I had shut down my Pi-hole servers at home. Those Pi-holes live on my Tailscale network, and my travel router connects back to that network. Somewhere, something was still trying to use them for DNS.\n","title":"GL.iNet's AdGuard Home Hides Upstream DNS Settings in a Non-Obvious Place","type":"til"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/glinet/","section":"Tags","summary":"","title":"Glinet","type":"tags"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/homelab/","section":"Tags","summary":"","title":"Homelab","type":"tags"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/networking/","section":"Tags","summary":"","title":"Networking","type":"tags"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/pihole/","section":"Tags","summary":"","title":"Pihole","type":"tags"},{"content":"","date":"March 3, 2026","externalUrl":null,"permalink":"/tags/tailscale/","section":"Tags","summary":"","title":"Tailscale","type":"tags"},{"content":"There are 277 sessions at SCALE 23x this year. 
I know this because I extracted all of them from the schedule webarchive files and scored every single one.\nI\u0026rsquo;m not proud of how long this took. But it surfaced some genuinely interesting tradeoffs — and the pattern of what conflicted with what tells you something real about where platform engineering is right now.\nThe scheduling problem is different when you manage a team # When I was an IC, conference scheduling was mostly about depth. Find the three talks that will blow your mind and plan the rest around them. Everything else is hallway track.\nManaging a platform team changes the calculus. I\u0026rsquo;m still optimizing for my own learning, but I\u0026rsquo;m also scouting for ideas to bring back to the team, watching for trends that will inform our 12-month roadmap, and — honestly — looking for external validation I can use in internal conversations. \u0026ldquo;Kelsey Hightower thinks Nix is the right move for reproducible builds\u0026rdquo; lands differently in a planning meeting than \u0026ldquo;I think Nix is the right move.\u0026rdquo;\nThere\u0026rsquo;s also the network dimension. The hallway conversations at SCALE aren\u0026rsquo;t incidental — for a platform team, the right connection to a Chainguard or Grafana Labs engineer can directly unlock technical help you\u0026rsquo;d otherwise spend weeks getting through support tickets.\nSo I needed a real schedule, not just a vague list of \u0026ldquo;sessions that sound interesting.\u0026rdquo;\nHow I scored everything # For each of the 277 sessions I weighed four things:\nTopic relevance got the most weight. My team owns K8s operations, observability pipelines, CI/CD, IaC, developer experience, supply chain security, and increasingly AI/ML infrastructure. Sessions that touch those directly scored high; adjacent topics scored lower; \u0026ldquo;Introduction to Kubernetes\u0026rdquo; scored near zero regardless of who was presenting.\nSpeaker prestige got significant weight. 
This is a heuristic I\u0026rsquo;ve come to trust more as I\u0026rsquo;ve gotten more senior: a known speaker at a respected company has more to lose from giving a bad talk. That doesn\u0026rsquo;t mean unknown speakers are bad — some of the best talks I\u0026rsquo;ve seen came from engineers I\u0026rsquo;d never heard of — but when you\u0026rsquo;re choosing between two relevant talks, the speaker signal matters.\nTalk depth I scored on title signals. \u0026ldquo;How we\u0026rdquo; and \u0026ldquo;Lessons from\u0026rdquo; and specific production numbers (\u0026ldquo;1 million incidents\u0026rdquo;, \u0026ldquo;1,300 repos\u0026rdquo;) are green flags. \u0026ldquo;Introduction to\u0026rdquo;, \u0026ldquo;101\u0026rdquo;, \u0026ldquo;Brief Tour\u0026rdquo;, \u0026ldquo;Getting Started\u0026rdquo; are skips regardless of topic. Seven years in this role means foundational content is usually a poor use of conference time.\nUniqueness got the least weight but still mattered. A talk where a Meta engineer describes production containers-in-containers at Meta\u0026rsquo;s scale is giving me something I literally cannot get from a blog post. A general overview of OpenTelemetry concepts is not.\nThe schedule that came out of it # Thursday is the PlanetNix pre-conference day. Kelsey Hightower is doing a 45-minute talk on whether now is actually the time for Nix, and I\u0026rsquo;m treating that as unmissable on speaker signal alone. The rest of the Thursday afternoon lines up nicely in the same room — Stormy Peters on reproducibility as a social contract, then two short PlanetNix sessions on Nix+K8s integration war stories. That leaves a 2.5-hour gap in the middle. I\u0026rsquo;m treating that as expo hall and hallway track time rather than filling it with workshops I\u0026rsquo;d partially pay attention to.\nFriday opens with John Willis at 9am — he co-authored The DevOps Handbook and his conference batting average is genuinely high. 
Then Kat Morgan doing a live demo of a platform stack that includes devcontainers, Nix, Docker-in-Docker, K8s-in-containers, KubeVirt, Ceph, Cilium, Dagger, and Gitea. The sheer scope of that list is either an ambitious talk or a 60-minute incident, and either way I want to be there. The afternoon anchors on Dustin Kirkland (SVP at Chainguard, formerly Google Cloud Distinguished Engineer) talking about agentic pipelines for OS supply chains. That\u0026rsquo;s supply chain + AI from someone with serious operational credibility.\nSaturday ends with a panel that\u0026rsquo;s the clearest must-attend of the entire conference: Kelsey Hightower, Stormy Peters, James Bayer, and Ron Efroni on how AI is reshaping infra and engineering. Four of the most thoughtful people in cloud-native and open source on one stage. I\u0026rsquo;ll be in that room early.\nSunday has the Mark Russinovich keynote on OSS supply chain security. He\u0026rsquo;s CTO of Azure and created Sysinternals. For supply chain content specifically, hearing what Microsoft is actually doing at scale beats any number of framework talks.\nThe tradeoffs are the interesting part # Three conflicts are worth naming because they reveal something about the current state of the field.\nThe Friday 2pm problem. Dustin Kirkland\u0026rsquo;s supply chain talk runs 2:00–3:00 in Ballroom F. At the same time, Leigh Capili is doing a deep technical dive into Flux internals — git push to etcd: An Anatomy of Flux — in Ballroom B. Leigh Capili is one of the people who actually knows how GitOps works at the implementation level, and the talk looks architecturally dense.\nI\u0026rsquo;m choosing Kirkland because supply chain security is more strategically urgent for my team right now. But the fact that these two are competing tells you something: the \u0026ldquo;how do we manage K8s the right way\u0026rdquo; question is now multi-dimensional. It\u0026rsquo;s not just GitOps vs. not-GitOps. 
It\u0026rsquo;s GitOps and supply chain provenance and agentic pipelines, and you can\u0026rsquo;t cover all of it in one afternoon.\nSaturday 11:15am. I\u0026rsquo;m going to Renovate Your Life: How We Automated Dependency Updates for 1,300 Repos by Dimitrios Sotirakis and Philip Hope. Then at 12:30 I\u0026rsquo;m staying in the same room for We Migrated to Loki and Survived: Lessons from the Trenches — presented by Vinh Nguyen, a member of my team, in his first ever conference talk.\nThe talk covers ZipRecruiter\u0026rsquo;s migration from Logz.io to self-managed Grafana Loki — cost-driven, which means the architecture decisions weren\u0026rsquo;t just technical, they were financial. The abstract promises cardinality challenges, production incidents, and an honest before/after cost comparison. That\u0026rsquo;s exactly the kind of content I\u0026rsquo;d have picked on merit anyway.\nThe competing session is GPU Sharing Done Right, which is directly relevant to our current AI workloads. On pure content value, that\u0026rsquo;s a real tradeoff. But other folks from my team will be at the conference and can brief me afterward, and if it\u0026rsquo;s recorded I\u0026rsquo;ll watch it.\nWhat actually made this easy: showing up for your people matters. Vinh has never spoken at a conference before. Being in that room isn\u0026rsquo;t about the content — it\u0026rsquo;s about being the kind of manager who\u0026rsquo;s present for the moments that count to the people on their team. The debrief I get from Vinh afterward will be worth more than any talk anyway. It runs against Zero Trust for Linux Admins with Open-Source IAM (Thomas Cameron, Room 101) and Rage Against the Machine: Fighting AI Complexity with Kubernetes Simplicity (Paul Yu, Ballroom A).\nRenovate wins because dependency automation at 1,300 repos is a production war story with a specific scale number, which is exactly the format I trust. 
But I\u0026rsquo;m a little annoyed at myself about skipping Thomas Cameron. Zero Trust IAM for Linux admins is something my team keeps putting off because it feels like \u0026ldquo;future work\u0026rdquo; — and I suspect a conference session is the thing that would make it feel concrete enough to actually schedule.\nThe Sunday 11:45am bloodbath. This is genuinely brutal. These all run simultaneously:\nEngin Diri (Pulumi) on building AI platforms without losing engineering principles\nJustin Garrison on the state of immutable Linux\nNoam Levy on profiling as the fourth observability signal\nHrittik Roy on taming LLM resource usage with K8s\nNathan Handler on building a unified cloud inventory\nDawn Foster (Linux Foundation / CHAOSS) on OSS sustainability and corporate power dynamics\nI\u0026rsquo;m going to Engin Diri because the topic is the closest match to what I\u0026rsquo;m actively trying to figure out. My team is under pressure to move faster on AI platform capabilities, and I\u0026rsquo;m trying to hold the line on platform quality. That tension is real and I want to hear someone reason through it carefully.\nBut Justin Garrison\u0026rsquo;s immutable Linux talk is the one I\u0026rsquo;ll regret most. He\u0026rsquo;s a former AWS EKS engineer with a track record of substantive, opinionated talks rather than surveys. And immutable OS infrastructure is the thing I keep saying \u0026ldquo;we\u0026rsquo;ll get to that\u0026rdquo; about. That\u0026rsquo;s a bad sign.\nWhat the conflicts actually tell you # There\u0026rsquo;s a pattern here. In previous years, the platform engineering conference schedule conflict was usually \u0026ldquo;which K8s operations talk\u0026rdquo; or \u0026ldquo;which observability vendor talk.\u0026rdquo; This year the conflicts are across dimensions:\nSupply chain provenance vs. GitOps depth\nAI platform architecture vs. immutable infrastructure\nDependency automation vs. zero trust IAM\nThe field has gotten wide enough that a platform team manager can no longer track it all. That\u0026rsquo;s not a complaint — it\u0026rsquo;s a sign that platform engineering has matured into something with real breadth. But it does mean that individual learning from conferences has diminishing returns unless you\u0026rsquo;re selective about which sub-problems you\u0026rsquo;re trying to make progress on.\nFor my team, the through-line is: supply chain integrity, AI workload operations, and developer experience (in that order). The schedule I built reflects that, which means I\u0026rsquo;m systematically under-investing in security depth (the SunSecCon track) and over-indexing on strategic talks that give me ammunition for internal conversations.\nThat\u0026rsquo;s a defensible tradeoff for a manager. It might be the wrong tradeoff for an IC on my team.\nThe thing I\u0026rsquo;m most uncertain about # I made a lot of calls based on \u0026ldquo;this is likely to be recorded\u0026rdquo; as justification for skipping something good. Leigh Capili\u0026rsquo;s Flux talk, Christian Hernandez\u0026rsquo;s AI readiness talk, Justin Garrison\u0026rsquo;s immutable Linux talk — all of those I\u0026rsquo;ve essentially punted to \u0026ldquo;watch the recording.\u0026rdquo;\nBut I never actually watch the recordings. I have a folder of \u0026ldquo;conference recordings to watch\u0026rdquo; that I haven\u0026rsquo;t opened since 2023.\nSo either I should stop using that as a justification, or I should build a real system for doing the post-conference review. I haven\u0026rsquo;t figured out which.\nSCALE 23x runs March 5–8, 2026 at the Pasadena Convention Center.\n","date":"March 1, 2026","externalUrl":null,"permalink":"/posts/2026/03/four-days-277-sessions-one-brutal-sunday-time-slot-scheduling-scale-23x-as-a-platform-team-manager/","section":"Posts","summary":"There are 277 sessions at SCALE 23x this year. 
I know this because I extracted all of them from the schedule webarchive files and scored every single one.\nI’m not proud of how long this took. But it surfaced some genuinely interesting tradeoffs — and the pattern of what conflicted with what tells you something real about where platform engineering is right now.\nThe scheduling problem is different when you manage a team # When I was an IC, conference scheduling was mostly about depth. Find the three talks that will blow your mind and plan the rest around them. Everything else is hallway track.\n","title":"Four days, 277 sessions, one brutal Sunday time slot: scheduling SCALE 23x as a platform team manager","type":"posts"},{"content":"","date":"March 1, 2026","externalUrl":null,"permalink":"/tags/scheduling/","section":"Tags","summary":"","title":"Scheduling","type":"tags"},{"content":"A coworker dropped /copy in our work Slack yesterday and I had to try it immediately. It\u0026rsquo;s a Claude Code slash command that copies Claude\u0026rsquo;s last response straight to your clipboard as markdown.\nBefore finding this, my workflow for grabbing a generated code snippet or shell command was embarrassingly manual — select text in the terminal, hope I got the boundaries right, paste it somewhere. Now I just type:\n/copy And the whole response lands in my clipboard, formatting intact — including code blocks. This is especially useful when Claude generates something multi-part, like a function plus its tests or a sequence of shell commands, where careful selection across scroll boundaries used to be the only option.\nPython code is where this really pays off. Terminal selection is sloppy about whitespace, and in Python, indentation is the syntax — a misaligned block is broken code. /copy pulls the raw markdown, so the indentation arrives exactly as Claude wrote it.\nThe markdown formatting means code blocks stay as code blocks when you paste into a GitHub issue. 
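To see concretely why whitespace fidelity matters in Python, here is a toy illustration (my own example, not from the post): dropping the leading spaces from one line turns valid code into a syntax error.

```python
# The snippet as /copy would deliver it, indentation intact...
src_good = "def f():\n    return 1\n"
# ...and the same snippet after a sloppy terminal selection drops
# the leading spaces.
src_bad = "def f():\nreturn 1\n"

compile(src_good, "<good>", "exec")  # compiles cleanly

try:
    compile(src_bad, "<bad>", "exec")
except SyntaxError as err:
    print("broken paste:", err.msg)
```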
(Slack\u0026rsquo;s markdown behavior from clipboard is inconsistent, so results may vary there.)\nYou can see all available slash commands by running /help. I suspect there are others I\u0026rsquo;ve been missing.\nTested on macOS with Claude Code 2.1.50.\n","date":"February 19, 2026","externalUrl":null,"permalink":"/til/claude-code-copy-command/","section":"TIL","summary":"A coworker dropped /copy in our work Slack yesterday and I had to try it immediately. It’s a Claude Code slash command that copies Claude’s last response straight to your clipboard as markdown.\nBefore finding this, my workflow for grabbing a generated code snippet or shell command was embarrassingly manual — select text in the terminal, hope I got the boundaries right, paste it somewhere. Now I just type:\n/copy And the whole response lands in my clipboard, formatting intact — including code blocks. This is especially useful when Claude generates something multi-part, like a function plus its tests or a sequence of shell commands, where careful selection across scroll boundaries used to be the only option.\n","title":"Claude Code's /copy Command","type":"til"},{"content":"At one of the Toronto Perl Mongers meetings Olaf was demonstrating something or other and during the demonstration he used Alfred to search metacpan. I\u0026rsquo;ve been a LaunchBar user for a long time, but the Alfred plugin offered auto completion, something my LaunchBar search didn\u0026rsquo;t have. He proceeded to show a couple of the other plugins (which I don\u0026rsquo;t recall at this point) and I decided I needed to try it out too.\nI had a few issues with it. I would often have to correct my DuckDuckGo searches because the auto complete would include extra words that I didn\u0026rsquo;t want. On other occasions the metacpan search would end up being a DuckDuckGo search because I was typing faster than the plugin could handle. 
These were annoying but I lived with them.\nAt The Perl Toolchain Summit, sitting beside Olaf, we noticed a couple other plugins that he was using:\nmeta::cpan (handlename)\nGitHub (Gregor Harlan)\nTravis CI for Alfred (Fabio Niephaus)\nDash (Kapeli)\nThese plugins are great, but the annoyance of retyping was wearing me down. I decided to start up LaunchBar again and see if it did any better with searching. Sure enough there were never any extra words, I had no issues where I was out-typing it, but I was spoiled by the Alfred plugins.\nSome quick searching and I came across LaunchBar plugin for GitHub by Brooks Swinnerton, and it\u0026rsquo;s fantastic. It was miles beyond the Alfred plugin and much faster. This left me looking at the metacpan search I was using, and its lack of auto completion. I started digging through the GitHub plugin and reading the LaunchBar Developer Action documentation and decided I could write my own.\nI started off with a simple JavaScript action and it worked immediately and responded quickly. I then remodeled the action architecture after the GitHub one, because the code is cleaner, backward compatible with previous versions of macOS, and provides testing; I also added build automation with Travis.\nAll of which leads to this announcement that the LaunchBar MetaCPAN Action is now available. Right now it searches MetaCPAN with auto completion, but I have plans to add more functionality.\n","date":"June 25, 2019","externalUrl":null,"permalink":"/posts/2019/06/announcing-metacpan-launchbar-action/","section":"Posts","summary":"At one of the Toronto Perl Mongers meetings Olaf was demonstrating something or other and during the demonstration he used Alfred to search metacpan. I’ve been a LaunchBar user for a long time, but the Alfred plugin offered auto completion, something my LaunchBar search didn’t have. 
He proceeded to show a couple of the other plugins (which I don’t recall at this point) and I decided I needed to try it out too.\n","title":"Announcing MetaCPAN LaunchBar Action","type":"posts"},{"content":"","date":"June 25, 2019","externalUrl":null,"permalink":"/categories/javascript/","section":"Categories","summary":"","title":"JavaScript","type":"categories"},{"content":"","date":"June 25, 2019","externalUrl":null,"permalink":"/categories/macos/","section":"Categories","summary":"","title":"MacOS","type":"categories"},{"content":"","date":"June 25, 2019","externalUrl":null,"permalink":"/categories/perl/","section":"Categories","summary":"","title":"Perl","type":"categories"},{"content":"","date":"June 19, 2019","externalUrl":null,"permalink":"/tags/conferences/","section":"Tags","summary":"","title":"Conferences","type":"tags"},{"content":"This week has been The Perl Conference in Pittsburgh or TPCiP (a tough acronym for the forearms to write). For some reason flights from Toronto to Pittsburgh all require a layover somewhere, and the shortest flight is 7 hours. Of course this does not include the time it takes to get to and from the airport, nor customs and security clearance. The drive is only 6 hours.\nThe Sessions # I always intend to take notes as the presentations are happening. This rarely happens as I get more drawn into the presentation. It\u0026rsquo;s either pay attention or take notes, and I\u0026rsquo;d rather pay attention.\nHappy Campers: Lessons Learned from Scouting\u0026rsquo;s Premier Leadership Course # This talk discusses the parallels between open source communities (focusing on the Perl Community) and the Boy Scouts of America. 
Both of which need to evolve with the changing times.\nChris relates his experience attending leadership courses to demonstrate the similarities, and how the Boy Scouts are handling the challenges.\nChris\u0026rsquo;s message echoed parts of what Sawyer X said during his talk Perl 5: The Past, The Present, and One Possible Future when calling for a clear and focused direction for Perl 5.\nOrganized Development with tmux # While I use tmux daily and am fairly fluent with it, if I learn one thing from this talk the return on investment is huge.\nSure enough within the first 10 minutes, Doug discusses the use of last-window, which I didn\u0026rsquo;t know existed. Here I\u0026rsquo;ve been hunting and next/prev between sessions like a heathen. I immediately started updating my tmux.conf. Turns out I had the default for last-window bound to another key and hadn\u0026rsquo;t rebound it. Leading to a clean up of my tmux.conf.\nNon-trivial jq # jq is a very useful tool for command line work with JSON. Unfortunately its syntax can be somewhat cryptic. Being at a conference and having someone available with examples and explanations as to what\u0026rsquo;s going on helped with my understanding.\nConfessions of a Space Cadet # genehack is a keyboard nerd too. This talk was an introduction to the world of mechanical keyboards and could be the first step down the rabbit hole for a number of people.\ngenehack describes the different layouts of keyboards, including some that I would call fairly non-standard. As well as explaining the different switching mechanisms that make up the rainbow of Cherry MX switches.\nRegexp Mini Tutorial: Assertions # Abigail\u0026rsquo;s talks are always very informative, and any new nuggets of information on writing regular expressions are always welcome. 
This talk did not disappoint: I\u0026rsquo;d not heard of the \\K directive and the examples given during the talk will make for some nice rewrites in a couple of projects.\nThe \\K directive tells the regular expression engine to forget what it just matched and continue from the current position to try and match the rest of the pattern. The primary use I see for this is when using anchor text in a substitution. No longer is putting back the original anchor text required.\nmy $text = \u0026#39;this is one two two many\u0026#39;;\n$text =~ s/ (two) \\s+ two \\s+ /$1 too /gx;\nbecomes:\nmy $text = \u0026#39;this is one two two many\u0026#39;;\n$text =~ s/ two \\s+ \\K two \\s+ /too /gx;\nI Never Metaphor I Didn\u0026rsquo;t Like: How Cognitive Linguistics Can Help You Be A (More) Bad-ass Developer # Unfortunately video of this talk has not been posted. I\u0026rsquo;ve contacted Chris to see if he knows why, but he hasn\u0026rsquo;t been told why either.\nChris walks through what metaphors are and the constructs that comprise them. While much of the linguistics discussion is over my head, Chris brings the discussion back around to demonstrate how we use metaphors every day in the computing environment. From user interface design to communicating code details between teams of developers.\nThis talk is designed to lead to discussion in the audience. Those discussions end up supporting the points that Chris makes during the talk.\nChris has given this talk previously and the video is available here.\nReadin\u0026rsquo; Rust # I\u0026rsquo;ve heard plenty about the Rust language but never really looked into it. 
I\u0026rsquo;m a firm believer that learning a different language strengthens the understanding of those I already know.\nUnfortunately some last minute CSS changes led to a presentation whose syntax highlighting left the slides unreadable at a distance.\nThis is one of the talks where reviewing the slides later would be informative.\nCompleteShell - Tab Completion for Every Shell # Ingy döt Net has been working on creating tab completion for programs that don\u0026rsquo;t have it. At PTS 2019 he was demonstrating completion for cpanm which included full distribution name completion on the command line. Really impressive, so I was looking forward to how far along the project is.\nThe goal of the project is to provide a simple mechanism for developers to add the completion files and man pages for their command line tools. A DSL has been defined that, when used with the tool, generates the files required to add completion. The project includes a number of repositories for different commands, but there is still a lot of work to do.\nC\u0026rsquo;mon Git Happy # I consider myself an intermediate git user. I can get myself out of trouble, I\u0026rsquo;ve created and used many a release process, and I know that I don\u0026rsquo;t know everything. Whenever genehack speaks on git though, no matter what level, it needs to be seen. This talk, while short, focuses on using and maintaining the git graph. genehack takes a tour of aliases, configuration settings, and commands that help keep it clean.\nThe One I didn\u0026rsquo;t See \u0026ndash; Perl Out Loud # I did not attend this talk, but immediately after everyone was talking about the fantastic job that @yomilly did. As soon as I returned home this talk went on the main TV and I watched it with my family.\n@yomilly is a great speaker\nSpeech to text technology has come a long way\nBe considerate to the names given to methods and variables. There\u0026rsquo;s no need for abbreviations. Abbreviations hinder accessibility, and they can affect clarity as well.\nLightning Talks # There are a lot of different lightning talks on various topics, some related to perl, some related to other aspects of programming, and some with nothing at all to do with either.\nTo undef or not to undef # Cees revised his talk from Toronto Perl Mongers and presented it as a shorter lightning talk. While lightning talks at TPM don\u0026rsquo;t really have a time limit, and tend to lead to lots of discussion, at TPC, they must end on time and have zero discussion afterwards.\nThis shortened talk gets right to the point of the differences between $var = undef and undef($var) and why you might want to use one over the other, but mostly shouldn\u0026rsquo;t care. The talk includes examples with technical details that help explain the points.\nFountain Pens # Under the category of nothing to do with programming but really interesting, this talk by Mike Fragassi on fountain pens is really well done. I\u0026rsquo;m not fond of my own penmanship, but I do appreciate a nice writing utensil, and fountain pens are cool. Might pick up one of the cheaper models he mentions to try it out.\nA New Name for Perl # I\u0026rsquo;ll admit, when Ingy döt Net stood up on stage and stated that this was the topic of his talk, I was prepared to hear a lot of backlash from the audience. Surprisingly there was none at all. Ingy proposes that Perl be the name of the community and the language family, while perl 5 and perl 6 be the names of the implementations of the language.\nSumming it up # Of course the best part of every conference is the time spent hanging out with friends. 
Be it in unsuspecting dive bars, or finally making it into the taco restaurant on the third attempt, or being kicked out of a friend\u0026rsquo;s room at 1am after a night of board games because they have to give a talk the next morning.\n","date":"June 19, 2019","externalUrl":null,"permalink":"/posts/2019/06/tpcip-2019-wrap-up/","section":"Posts","summary":"This week has been The Perl Conference in Pittsburgh or TPCiP (a tough acronym for the forearms to write). For some reason flights from Toronto to Pittsburgh all require a layover somewhere, and the shortest flight is 7 hours. Of course this does not include the time it takes to get to and from the airport, nor customs and security clearance. The drive is only 6 hours.\nThe Sessions # I always intend to take notes as the presentations are happening. This rarely happens as I get more drawn into the presentation. It’s either pay attention or take notes, and I’d rather pay attention.\n","title":"TPCiP 2019 Wrap Up","type":"posts"},{"content":"","date":"June 5, 2019","externalUrl":null,"permalink":"/tags/analytics/","section":"Tags","summary":"","title":"Analytics","type":"tags"},{"content":"After rebuilding this site and my work site, I wanted a view into whether people were visiting the sites, and if they were, which pages they were interested in.\nI have simple needs:\nHow many people are visiting\nWhen are they visiting\nWhat are they visiting\nIf they\u0026rsquo;re referred from another site, which\nI don\u0026rsquo;t want to know anything else, nor do I want to give any of my visitor details to Google. When I started looking into alternatives I came across It\u0026rsquo;s not me, Google, it\u0026rsquo;s you - from GA to Fathom by Jeff Geerling. Those who work with Ansible may recognize him for his Ansible roles or his book (Ansible for DevOps). Both are highly recommended.\nAs expected, Jeff\u0026rsquo;s solution includes an Ansible role to install Fathom on an existing system. 
This works great if you\u0026rsquo;re dedicating a system, but using a shared system I prefer to host my services as containers, with nginx proxying.\nThese are the steps I took to get fathom up and running in a container for my sites.\nStarting with a docker-compose.yml, even though this is a single service and a relatively simple one, I find that creating a docker-compose.yml file helps to document the configuration and allows for easy rebuilding and container interaction. Plus I don\u0026rsquo;t have to remember all the command line switches.\nversion: \u0026#34;3.4\u0026#34;\nservices:\n  fathom:\n    image: usefathom/fathom:latest\n    restart: unless-stopped\n    ports:\n      - \u0026#34;8081:8080\u0026#34;\n    volumes:\n      - type: bind\n        source: ./.env\n        target: /app/.env\n        read_only: true\nThe Fathom server runs on port 8080, and as this port is popular, and likely to conflict, this configuration uses port 8081. Of course the port number can be any available port.\nFathom supports configuration through a .env file, but doesn\u0026rsquo;t seem to pull the values directly from the environment. I created a .env file alongside the docker-compose.yml file for docker to load. The compose file mounts the .env file into /app so that the fathom cli itself will use it.\nFATHOM_GZIP=true\nFATHOM_DEBUG=true\nFATHOM_DATABASE_DRIVER=\u0026#34;postgres\u0026#34;\nFATHOM_DATABASE_NAME=\u0026#34;fathom\u0026#34;\nFATHOM_DATABASE_USER=\u0026#34;fathom\u0026#34;\nFATHOM_DATABASE_PASSWORD=\u0026#34;\u0026#34;\nFATHOM_DATABASE_HOST=\u0026#34;172.18.0.1\u0026#34;\nFATHOM_DATABASE_SSLMODE=\u0026#34;disable\u0026#34;\nFATHOM_SECRET=\u0026#34;must not leave this in public\u0026#34;\nI have a shared PostgreSQL instance for all sites. Being able to use it as the data store is a huge bonus. The FATHOM_DATABASE_HOST setting points to the docker IP address that\u0026rsquo;s used to access the host system. 
This way all communication to the database is within the host.\nNow that the application is configured, create a specific database to hold the fathom data. Make sure the database name, user name and password match those in the application configuration.\ncreatedb fathom\npsql fathom\nNow create the fathom user, and update that database so that the fathom user is the owner.\ncreate user fathom;\nalter database fathom owner to fathom;\nWhen fathom starts it will apply migrations to the database automatically so that the structure is in sync with the application\u0026rsquo;s requirements.\n","date":"June 5, 2019","externalUrl":null,"permalink":"/posts/2019/06/analytics-without-google/","section":"Posts","summary":"After rebuilding this site and my work site, I wanted a view into whether people were visiting the sites, and if they were, which pages they were interested in.\nI have simple needs:\nHow many people are visiting\nWhen are they visiting\nWhat are they visiting\nIf they’re referred from another site, which\nI don’t want to know anything else, nor do I want to give any of my visitor details to Google. When I started looking into alternatives I came across It’s not me, Google, it’s you - from GA to Fathom by Jeff Geerling. Those who work with Ansible may recognize him for his Ansible roles or his book (Ansible for DevOps). Both are highly recommended.\n","title":"Analytics without Google","type":"posts"},{"content":"","date":"June 5, 2019","externalUrl":null,"permalink":"/categories/devops/","section":"Categories","summary":"","title":"DevOps","type":"categories"},{"content":"","date":"June 5, 2019","externalUrl":null,"permalink":"/tags/docker/","section":"Tags","summary":"","title":"Docker","type":"tags"},{"content":"","date":"June 5, 2019","externalUrl":null,"permalink":"/tags/full-stack/","section":"Tags","summary":"","title":"Full Stack","type":"tags"},{"content":"I\u0026rsquo;m heading to PgCon 2019, this will be my first time attending 
A lot of the work that I do involves PostgreSQL in some way, be it as a developer, architecting database solutions, or digging into application performance issues.\nI\u0026rsquo;ve been to a number of Toronto PostgreSQL User Group meetings and through discussions with others at the meeting, this is the conference to go to for technical information. I have opted to attend the Unconference which takes place the day before the talks start. It sounds like an interesting concept where the content of the day is determined by the attendees.\nThe formal schedule includes a number of talks that I\u0026rsquo;m really interested in and topics cover a wide range of interests.\n","date":"May 8, 2019","externalUrl":null,"permalink":"/posts/2019/05/heading-to-pgcon-2019/","section":"Posts","summary":"I’m heading to PgCon 2019, this will be my first time attending a PostgreSQL specific conference. A lot of the work that I do involves PostgreSQL in some way, be it as a developer, architecting database solutions, or digging into application performance issues.\nI’ve been to a number of Toronto PostgreSQL User Group meetings and through discussions with others at the meeting, this is the conference to go to for technical information. I have opted to attend the Unconference which takes place the day before the talks start. 
It sounds like an interesting concept where the content of the day is determined by the attendees.\n","title":"Heading to PgCon 2019","type":"posts"},{"content":"","date":"May 8, 2019","externalUrl":null,"permalink":"/categories/postgresql/","section":"Categories","summary":"","title":"Postgresql","type":"categories"},{"content":"","date":"April 30, 2019","externalUrl":null,"permalink":"/tags/devops/","section":"Tags","summary":"","title":"DevOps","type":"tags"},{"content":"","date":"April 30, 2019","externalUrl":null,"permalink":"/tags/metacpan/","section":"Tags","summary":"","title":"MetaCPAN","type":"tags"},{"content":"My first Perl Toolchain Summit has come to a close and it was amazing and productive. I spent a lot of my time containerizing MetaCPAN, talking and helping other groups with Docker solutions for their projects, and laying out future work and directions. All of which would have been difficult if it weren’t for the summit bringing everyone together.\nA workflow to create base level images was developed with automatic generation and uploading to Docker Hub via Travis. These details were shared with all in attendance who were interested.\nThe first MetaCPAN hosted site GitHub meets CPAN has been ported into a container and tested locally. While the plan and framework for initial deployment and management via Ansible has been laid down, the site will be going live shortly.\nImages were created for the other sites including the main site. These images are available for the developers to use in their local development with plans to roll out to the production services as the Ansible environment and support procedures are fleshed out. The groundwork for the Ansible configuration has been laid, with more roles and playbooks set to be developed as needed.\nAll these new containers and processes have changed the way the development of MetaCPAN happens. It no longer requires the use of Vagrant and provides better container management. 
This will allow for easier on-boarding of new developers, and better control overall. Working together we were able to ensure that the site continued to operate in this new setup. Documentation is still outstanding but is being worked on.\nA new container integrating logging with honeycomb.io was added to the stack. This will allow us to forward all logs for all sites and use the honeycomb toolkit to interrogate them, providing better problem determination and resolution.\nThe future plans are to deploy to servers using Ansible and docker-compose. A further migration to Nomad using nomad-compose is planned for complete container management.\nNone of this would have been possible without the help of the amazing MetaCPAN team, the organizers and sponsors of the Perl Toolchain Summit.\nBooking.com, cPanel, MaxMind, FastMail, ZipRecruiter, Cogendo, Elastic, OpenCage Data, Perl Services, Zoopla, Archer Education, OpusVL, Oetiker+Partner, SureVoIP, YEF.\n","date":"April 30, 2019","externalUrl":null,"permalink":"/posts/2019/04/returning-from-pts-2019/","section":"Posts","summary":"My first Perl Toolchain Summit has come to a close and it was amazing and productive. I spent a lot of my time containerizing MetaCPAN, talking and helping other groups with Docker solutions for their projects, and laying out future work and directions. All of which would have been difficult if it weren’t for the summit bringing everyone together.\nA workflow to create base level images was developed with automatic generation and uploading to Docker Hub via Travis. These details were shared with all in attendance who were interested.\n","title":"Returning from PTS 2019","type":"posts"},{"content":"It’s with great honour that I will be attending the Perl Toolchain Summit in Marlow, England. 
It takes place from April 25 to 28.\nOlaf, Leo and I have been discussing MetaCPAN infrastructure and, with the work that\u0026rsquo;s been done getting docker containers running for developers, migrating that to running containers on the existing infrastructure.\nImages will be maintained using the combination of GitHub, Travis, and Docker Hub, as outlined in this blog post by Vaidik Kapoor on Medium.\nThe outlook is to move to an automated container management system (likely Nomad) at some point, but for now just containerizing. This should help us better manage the application.\nBeing able to provision new hardware and maintain the existing environments with Ansible is also on my personal agenda. I\u0026rsquo;ve started some of the layout work already but haven\u0026rsquo;t pushed anything yet.\nDuring meta::hack 3 I started working on the OpenAPI implementation for MetaCPAN. I\u0026rsquo;ve done more work on the API since then, but need to fix up some tests in order to deploy it. Being able to sit with the other MetaCPAN developers should help expedite this.\nI\u0026rsquo;d like to thank the organizers of the event and the many sponsors for making this possible:\nBooking.com, cPanel, MaxMind, FastMail, ZipRecruiter, Cogendo, Elastic, OpenCage Data, Perl Services, Zoopla, Archer Education, OpusVL, Oetiker+Partner, SureVoIP, YEF.\n","date":"April 23, 2019","externalUrl":null,"permalink":"/posts/2019/04/heading-to-pts-2019/","section":"Posts","summary":"It’s with great honour that I will be attending the Perl Toolchain Summit in Marlow, England. 
It takes place from April 25 to 28.\nOlaf, Leo and I have been discussing MetaCPAN infrastructure and, with the work that’s been done getting docker containers running for developers, migrating that to running containers on the existing infrastructure.\nImages will be maintained using the combination of GitHub, Travis, and Docker Hub, as outlined in this blog post by Vaidik Kapoor on Medium.\n","title":"Heading to PTS 2019","type":"posts"},{"content":"","date":"April 17, 2019","externalUrl":null,"permalink":"/tags/dotfiles/","section":"Tags","summary":"","title":"Dotfiles","type":"tags"},{"content":"This morning was spent cleaning up my dotfiles. I\u0026rsquo;ve often had issues with the ~/.config directory when using plugin managers with both vim-plug for NeoVim and fisher for fish shell. These plugins self-update, and using them on multiple systems often requires checking in changes from upgrading to the latest versions when they do.\nAdding the files to .gitignore might be a solution, but that would still require me to do the installation on every system I want to use them on. The solution I came up with was to move the nvim and fish directories outside of the config directory and create a Makefile (or in the case of fish, recursive Makefiles) to manage the linking.\nUsing this system, I should be able to remove config from my dotfiles repository and have the Makefiles create the required links. The ultimate goal is for each \u0026ldquo;service\u0026rdquo; to have its own Makefile, with a dotfiles Makefile that will execute each by name. Then a simple playbook for my Ansible Workspace workflow will execute them based on host variables.\n","date":"April 17, 2019","externalUrl":null,"permalink":"/posts/2019/04/dotfiles-and-makefiles/","section":"Posts","summary":"This morning was spent cleaning up my dotfiles. I’ve often had issues with the ~/.config directory when using plugin managers with both vim-plug for NeoVim and fisher for fish shell. 
These plugins self-update, and using them on multiple systems often requires checking in changes from upgrading to the latest versions when they do.\nAdding the files to .gitignore might be a solution, but that would still require me to do the installation on every system I want to use them on. The solution I came up with was to move the nvim and fish directories outside of the config directory and create a Makefile (or in the case of fish, recursive Makefiles) to manage the linking.\n","title":"dotfiles and Makefiles","type":"posts"},{"content":"","date":"April 17, 2019","externalUrl":null,"permalink":"/tags/systems/","section":"Tags","summary":"","title":"Systems","type":"tags"},{"content":"","date":"April 17, 2019","externalUrl":null,"permalink":"/tags/unix/","section":"Tags","summary":"","title":"UNIX","type":"tags"},{"content":"DC-Baltimore Perl Workshop has come to a close and the organizers should be proud of what they accomplished. The comment was made at the after-conference dinner that they make it look easy when really we all know it\u0026rsquo;s not.\nI spent a lot of the hallway track talking to different people about technologies that weren\u0026rsquo;t perl but were shared amongst us all, and a lot of time talking to @genehack. Getting out and talking geek was my intention for this conference, and that made it a success for me.\nI had planned all along to give a talk on Fixup and Autosquash; it\u0026rsquo;s one of those things where time and time again I run into developers who don\u0026rsquo;t know about this combination of git commands, or know of them but don\u0026rsquo;t understand how to put them to good use. I did the talk as a Lightning Talk, as I didn\u0026rsquo;t expect there to be enough content to fill a full half hour. 
Afterwards, @genehack advised me that fleshing out the examples a bit more could easily fill the 30 minutes.\nI have been giving numerous talks to our local perl Mongers group, and by giving this talk I\u0026rsquo;ve learned that what works with a small group or via a streamed talk doesn\u0026rsquo;t work with a larger group staring at a screen on a wall. While all of my talks tend to include code examples that syntax and code highlighting can display and separate quite well, highlighting shouldn\u0026rsquo;t be used as the only mechanism in a larger setting. This is something that I\u0026rsquo;ll work on.\nOne thing I do really like is when, after a talk, people share their experiences or another mechanism for doing something I\u0026rsquo;ve discussed. In this instance it was the use of an empty initial commit to allow easier rebasing of the first commit. It was pointed out that the --root option allows for rebasing an initial commit.\n\u0026ldquo;git rebase [-i] --root $tip\u0026rdquo; can now be used to rewrite all the history leading to \u0026ldquo;$tip\u0026rdquo; down to the root commit.\nFrom the git release notes for version 1.7.12\nThis is not something I\u0026rsquo;ve had a chance to use yet, but I will give it a try.\nAs I said, all in all I think the conference was a success, and I\u0026rsquo;m grateful to the organizers for putting it together.\n","date":"April 7, 2019","externalUrl":null,"permalink":"/posts/2019/04/dcbpw-2019-wrap-up/","section":"Posts","summary":"DC-Baltimore Perl Workshop has come to a close and the organizers should be proud of what they accomplished. The comment was made at the after-conference dinner that they make it look easy when really we all know it’s not.\nI spent a lot of the hallway track talking to different people about technologies that weren’t perl but were shared amongst us all, and a lot of time talking to @genehack. 
Getting out and talking geek was my intention for this conference, and that made it a success for me.\n","title":"DCBPW 2019 Wrap Up","type":"posts"},{"content":"","date":"April 7, 2019","externalUrl":null,"permalink":"/tags/talks/","section":"Tags","summary":"","title":"Talks","type":"tags"},{"content":"Today I’m heading to the DC-Baltimore Perl Workshop. This conference has been on my list to attend for years. With DC-Baltimore hosting TPC two years ago, they didn’t have the workshop and then skipped last year, but as soon as I heard they were hosting it again this year I jumped to attend.\nUnfortunately some of my friends who would normally make this event aren’t going to make it this year, but I will catch up with them at TPC in Pittsburgh.\nThe schedule looks well packed and I will have a full day.\n","date":"April 5, 2019","externalUrl":null,"permalink":"/posts/2019/04/heading-to-dcbpw-2019/","section":"Posts","summary":"Today I’m heading to the DC-Baltimore Perl Workshop. This conference has been on my list to attend for years. With DC-Baltimore hosting TPC two years ago, they didn’t have the workshop and then skipped last year, but as soon as I heard they were hosting it again this year I jumped to attend.\nUnfortunately some of my friends who would normally make this event aren’t going to make it this year, but I will catch up with them at TPC in Pittsburgh.\n","title":"Heading to DCBPW 2019","type":"posts"},{"content":"Tonight I gave a talk at Toronto Perl Mongers discussing docker-compose. This month\u0026rsquo;s topic started as a discussion at the end of my presentation last month, as I was using containers to demonstrate OpenAPI interaction.\nThere was a lot of attendee involvement during the presentation. Plenty of questions and discussions. 
The use of docker containers for MetaCPAN dominated the conversation and examples, as I had the ability to show how they worked and interact with the containers running on my system.\nVideo for the presentation is here, and the slides with all my notes are available here.\nDuring the discussion after the formal end of the talk, I discussed a talk that I\u0026rsquo;d watched from the goto; conference on containers. The talk is by Liz Rice titled Containers from Scratch, where she codes a container environment from scratch in golang on stage. This talk solidified for me what containers are and how they work. I highly recommend it.\n","date":"March 28, 2019","externalUrl":null,"permalink":"/posts/2019/03/docker-compose-explained/","section":"Posts","summary":"Tonight I gave a talk at Toronto Perl Mongers discussing docker-compose. This month’s topic started as a discussion at the end of my presentation last month, as I was using containers to demonstrate OpenAPI interaction.\nThere was a lot of attendee involvement during the presentation. Plenty of questions and discussions. The use of docker containers for MetaCPAN dominated the conversation and examples, as I had the ability to show how they worked and interact with the containers running on my system.\n","title":"Docker Compose Explained","type":"posts"},{"content":"","date":"March 28, 2019","externalUrl":null,"permalink":"/tags/docker-compose/","section":"Tags","summary":"","title":"Docker-Compose","type":"tags"},{"content":"","date":"March 3, 2019","externalUrl":null,"permalink":"/tags/git/","section":"Tags","summary":"","title":"Git","type":"tags"},{"content":"Recently at work there have been a number of discussions about implementing git, what workflow to use, and how workflows work.\nGitFlow\nI\u0026rsquo;ve always defined the workflow we tend to follow as GitFlow-esque: somewhat like GitFlow in philosophy, but not followed as strictly. 
One of the opponents of adopting GitFlow cited the post GitFlow considered harmful, which describes where GitFlow can go wrong or be overly complex. Interestingly, those are the bits that we don\u0026rsquo;t adopt, which make up the \u0026ldquo;esque\u0026rdquo;. Before now, though, I hadn\u0026rsquo;t read a proper description of the workflow we usually follow; luckily, an update made in 2017 describes the OneFlow workflow.\n","date":"March 3, 2019","externalUrl":null,"permalink":"/posts/2019/03/git-workflows/","section":"Posts","summary":"Recently at work there have been a number of discussions about implementing git, what workflow to use, and how workflows work.\nGitFlow\nI’ve always defined the workflow we tend to follow as GitFlow-esque: somewhat like GitFlow in philosophy, but not followed as strictly. One of the opponents of adopting GitFlow cited the post GitFlow considered harmful, which describes where GitFlow can go wrong or be overly complex. Interestingly, those are the bits that we don’t adopt, which make up the “esque”. Before now, though, I hadn’t read a proper description of the workflow we usually follow; luckily, an update made in 2017 describes the OneFlow workflow.\n","title":"Git Workflows","type":"posts"},{"content":"","date":"March 3, 2019","externalUrl":null,"permalink":"/categories/programming/","section":"Categories","summary":"","title":"Programming","type":"categories"},{"content":"","date":"March 3, 2019","externalUrl":null,"permalink":"/tags/workflow/","section":"Tags","summary":"","title":"Workflow","type":"tags"},{"content":"","date":"March 3, 2019","externalUrl":null,"permalink":"/categories/blogging/","section":"Categories","summary":"","title":"Blogging","type":"categories"},{"content":"I felt like a website change was in order. I wasn\u0026rsquo;t completely happy with the previous layout and scheme. Maybe it\u0026rsquo;s just Spring Cleaning on the mind. 
As such I\u0026rsquo;m moving from Jekyll to Hugo.\nThe idea is to have GitLab CI/CD automatically build the site when a new article is pushed to it. I have plans for how this will all work, but need to get to the implementation.\nDuring this migration process I\u0026rsquo;m going to clean up what\u0026rsquo;s been posted and do a better job of tagging and categorizing (at least that\u0026rsquo;s the hope).\nThis is very much a work in progress.\n","date":"March 3, 2019","externalUrl":null,"permalink":"/posts/2019/03/moving-to-hugo/","section":"Posts","summary":"I felt like a website change was in order. I wasn’t completely happy with the previous layout and scheme. Maybe it’s just Spring Cleaning on the mind. As such I’m moving from Jekyll to Hugo.\nThe idea is to have GitLab CI/CD automatically build the site when a new article is pushed to it. I have plans for how this will all work, but need to get to the implementation.\n","title":"Moving to Hugo","type":"posts"},{"content":"","date":"February 28, 2019","externalUrl":null,"permalink":"/tags/api/","section":"Tags","summary":"","title":"API","type":"tags"},{"content":"Tonight I gave a talk at Toronto Perl Mongers discussing MetaCPAN, Mojolicious, and OpenAPI. This is a result of the work that was done at meta::hack 3 this past November.\nThe idea of the talk is to take the work that was done and provide examples and greater detail as to how to implement a project using OpenAPI and Mojolicious.\nThe talk was very well received, and as I was running MetaCPAN in a number of docker containers for demonstration purposes, an interesting conversation on containers started.\nVideo for the presentation is here, and the slides with all my notes are available here.\n","date":"February 28, 2019","externalUrl":null,"permalink":"/posts/2019/02/metacpan-mojolicious-and-openapi/","section":"Posts","summary":"Tonight I gave a talk at Toronto Perl Mongers discussing MetaCPAN, Mojolicious, and OpenAPI. This is a result of 
the work that was done at meta::hack 3 this past November.\nThe idea of the talk is to take the work that was done and provide examples and greater detail as to how to implement a project using OpenAPI and Mojolicious.\nThe talk was very well received, and as I was running MetaCPAN in a number of docker containers for demonstration purposes, an interesting conversation on containers started.\n","title":"MetaCPAN Mojolicious and OpenAPI","type":"posts"},{"content":"","date":"February 28, 2019","externalUrl":null,"permalink":"/tags/mojolicious/","section":"Tags","summary":"","title":"Mojolicious","type":"tags"},{"content":"","date":"February 28, 2019","externalUrl":null,"permalink":"/tags/openapi/","section":"Tags","summary":"","title":"OpenAPI","type":"tags"},{"content":"","date":"February 28, 2019","externalUrl":null,"permalink":"/tags/perl/","section":"Tags","summary":"","title":"Perl","type":"tags"},{"content":"Building off the work that was done at this year\u0026rsquo;s meta::hack, I participated in the Mojolicious Advent Calendar with an article on MetaCPAN, Mojolicious, and OpenAPI.\nThis article documents the steps that were taken to document and implement MetaCPAN\u0026rsquo;s Search API using OpenAPI. There was a lot more involved in the process and a number of endpoints were documented; however, that would have made the article super huge.\n","date":"December 7, 2018","externalUrl":null,"permalink":"/posts/2018/12/metacpan-mojolicious-and-openapi-advent/","section":"Posts","summary":"Building off the work that was done at this year’s meta::hack, I participated in the Mojolicious Advent Calendar with an article on MetaCPAN, Mojolicious, and OpenAPI.\nThis article documents the steps that were taken to document and implement MetaCPAN’s Search API using OpenAPI. 
There was a lot more involved in the process and a number of endpoints were documented; however, that would have made the article super huge.\n","title":"Metacpan, Mojolicious, and OpenAPI Advent","type":"posts"},{"content":"Olaf has created a really great summary of the work that was done during meta::hack this year. It was a lot of fun, and a lot of interesting work was done, including some proverbial \u0026ldquo;Friday night at 5pm deployments\u0026rdquo; (actual time was Sunday immediately before boarding a plane).\nmeta::hack 3 Wrap Report\n","date":"November 21, 2018","externalUrl":null,"permalink":"/posts/2018/11/metahack3-followup/","section":"Posts","summary":"Olaf has created a really great summary of the work that was done during meta::hack this year. It was a lot of fun, and a lot of interesting work was done, including some proverbial “Friday night at 5pm deployments” (actual time was Sunday immediately before boarding a plane).\nmeta::hack 3 Wrap Report\n","title":"meta::hack3 Followup","type":"posts"},{"content":"I\u0026rsquo;ve been invited to attend meta::hack 3! I\u0026rsquo;m very excited for this opportunity to give back to the community, work on a site that I use so much every day, and work with some really great people.\nSome background on the event and who is going is provided here by Olaf: meta::hack is back!.\nI\u0026rsquo;d like to say thank you to the 2018 sponsors of the event, Booking.com and ServerCentral.\n","date":"November 8, 2018","externalUrl":null,"permalink":"/posts/2018/11/metahack-3/","section":"Posts","summary":"I’ve been invited to attend meta::hack 3! 
I’m very excited for this opportunity to give back to the community, work on a site that I use so much every day, and work with some really great people.\nSome background on the event and who is going is provided here by Olaf: meta::hack is back!.\nI’d like to say thank you to the 2018 sponsors of the event, Booking.com and ServerCentral.\n","title":"meta::hack 3","type":"posts"},{"content":"","date":"April 11, 2018","externalUrl":null,"permalink":"/tags/ansible/","section":"Tags","summary":"","title":"Ansible","type":"tags"},{"content":"By default Ubuntu/Debian docker images do not include python as part of the distribution. Before running Ansible tasks against containers, including fact gathering, python must be installed.\nThe value of ansible_os_family cannot be used because it\u0026rsquo;s not available until after facts have been gathered.\n- hosts: docker-containers\n  gather_facts: False\n  pre_tasks:\n    - name: Check for apt (Debian family)\n      raw: \u0026#34;test -e /usr/bin/apt\u0026#34;\n      register: apt_installed\n      ignore_errors: true\n    - name: Install python for Ansible\n      raw: \u0026#34;[ -e /usr/bin/python ] || (apt -y update \u0026amp;\u0026amp; apt install -y python-minimal)\u0026#34;\n      register: output\n      when: apt_installed and apt_installed.rc == 0\n      changed_when: output.stdout != \u0026#34;\u0026#34;\nAnswering a couple questions that the above might cause:\nQ: Why not use the Ansible stat module to test if /usr/bin/apt is installed?\nA: It uses python.\nQ: Why are errors ignored?\nA: A test failure is an error, and Ansible will stop processing if /usr/bin/apt doesn\u0026rsquo;t exist.\n","date":"April 11, 2018","externalUrl":null,"permalink":"/posts/2018/04/ansible-docker-and-ubuntu/","section":"Posts","summary":"By default Ubuntu/Debian docker images do not include python as part of the distribution. 
Before running Ansible tasks against containers, including fact gathering, python must be installed.\nThe value of ansible_os_family cannot be used because it’s not available until after facts have been gathered.\n- hosts: docker-containers\n  gather_facts: False\n  pre_tasks:\n    - name: Check for apt (Debian family)\n      raw: \"test -e /usr/bin/apt\"\n      register: apt_installed\n      ignore_errors: true\n    - name: Install python for Ansible\n      raw: \"[ -e /usr/bin/python ] || (apt -y update \u0026\u0026 apt install -y python-minimal)\"\n      register: output\n      when: apt_installed and apt_installed.rc == 0\n      changed_when: output.stdout != \"\"\nAnswering a couple questions that the above might cause:\n","title":"Ansible, Docker, and Ubuntu","type":"posts"},{"content":"","date":"April 11, 2018","externalUrl":null,"permalink":"/tags/ubuntu/","section":"Tags","summary":"","title":"Ubuntu","type":"tags"},{"content":"Hi. My name is Shawn Sorichetti, and I like to write code.\nFor almost 3 years I\u0026rsquo;ve been working as a Senior Software Architect with a consulting agency, providing custom development, building infrastructure, migrating tools, and automating data extraction. Solutions typically involve PostgreSQL, Docker, Prometheus, Zabbix, Perl and JavaScript.\nPreviously I was working with a SaaS company providing Data Analytics to Telecommunication companies. In the nine years I spent there, my major responsibility was the development and maintenance of the company\u0026rsquo;s primary application. The existing version was based on stored procedures specific to the Data Warehouse, retained and executed within the RDBMS. 
The new version, however, designed and developed by myself, is a Java application that uses template files to dynamically generate SQL statements specific to the RDBMS and execute them based on the customer data model.\nAs a part of development and maintenance of the legacy and replacement applications, I’ve led projects including customer implementations and performance optimization with RDBMS vendors (Oracle, Netezza, Teradata), and provided UNIX support and performance optimizations to both co-workers and customers. I\u0026rsquo;ve also written many utilities, including web service to RDBMS interfaces, a dynamic table generator, and a universal data exporter.\nPrior to my joining the company, version control and testing were afterthoughts. I helped to implement these practices and make them the standard rather than the exception.\nFor the 12 years previous to this position, I worked at IBM as a Tools and Automation specialist. This involved system performance, availability and compliance management, as well as backup and recovery, including disaster planning and tests. Customers were highly visible, required high availability, and included major retailers, banks and Government agencies. I participated in, led, and resolved major incidents and outages, be they networking, power, or system failures.\nI worked with local, cross-country and international teams to build data centers, implement applications, ensure security compliance, and investigate and resolve problems and incidents. I designed and developed many applications including: automating user registration, billing, monitoring, system reporting, and testing.\nPrimarily my work has been in Perl, Java and SQL. 
I have also worked with JavaScript, Ruby, C#/.NET, NoSQL, and dabbled with Groovy, Go and Elm.\nIn any organization I\u0026rsquo;ve been known as the \u0026ldquo;Go-to guy\u0026rdquo;, the \u0026ldquo;one to get it done and done right\u0026rdquo;, and the \u0026ldquo;guy with the answers\u0026rdquo;.\nThank you.\n~Shawn\nhttps://ssoriche.com/resume.html\n","externalUrl":null,"permalink":"/coverletter/","section":"Shawn Sorichetti","summary":"Hi. My name is Shawn Sorichetti, and I like to write code.\nFor almost 3 years I’ve been working as a Senior Software Architect with a consulting agency, providing custom development, building infrastructure, migrating tools, and automating data extraction. Solutions typically involve PostgreSQL, Docker, Prometheus, Zabbix, Perl and JavaScript.\nPreviously I was working with a SaaS company providing Data Analytics to Telecommunication companies. In the nine years I spent there, my major responsibility was the development and maintenance of the company’s primary application. The existing version was based on stored procedures specific to the Data Warehouse, retained and executed within the RDBMS. The new version, however, designed and developed by myself, is a Java application that uses template files to dynamically generate SQL statements specific to the RDBMS and execute them based on the customer data model.\n","title":"Cover letter","type":"page"},{"content":"Another day of back-to-back meetings, punctuated by my puppy’s sudden decision that my laptop is a chew toy. 🐾 The only thing more predictable than my schedule is the timing of her “I am the most important thing in this room” moments.\n","externalUrl":null,"permalink":"/now/","section":"Shawn Sorichetti","summary":"Another day of back-to-back meetings, punctuated by my puppy’s sudden decision that my laptop is a chew toy. 
🐾 The only thing more predictable than my schedule is the timing of her “I am the most important thing in this room” moments.\n","title":"Now","type":"page"},{"content":"A mix of open source contributions, infrastructure tooling, and personal utilities. Active projects only — archived work has been removed.\nOpen Source Contributions # MetaCPAN # MetaCPAN is the open source search engine for CPAN, the Comprehensive Perl Archive Network. It provides a web interface and API used by the Perl community worldwide.\nI have been a long-term contributor to the MetaCPAN project, leading the migration of the entire infrastructure from Docker Compose to Kubernetes. The metacpan-k8s repository contains the Kubernetes manifests and configuration that now runs the production environment. Prior to that I worked on the OpenAPI implementation for the MetaCPAN API.\nKubernetes and Infrastructure # kubectl-consolidation # A kubectl plugin that surfaces Karpenter consolidation blockers for nodes in a cluster. When Karpenter cannot consolidate nodes, the reason is often buried in logs or annotations. This plugin makes the blockers visible at a glance.\nTerraform Providers # terraform-provider-kanidm # A Terraform provider for Kanidm, a modern identity management system. Supports managing users, groups, OAuth2 clients, and OIDC configuration via Terraform. Useful for teams running self-hosted identity infrastructure.\nterraform-provider-soft-serve # A Terraform provider for Soft Serve, Charm\u0026rsquo;s self-hosted git server. Manages repositories, access controls, and settings over SSH using Terraform.\nGo CLI Tools # git-reword # Reword git commit messages without shell-quoting headaches. Useful when amending or rewording commits that contain characters that confuse the shell.\ngit-track # Add or remove remote branch references from git configuration. Simplifies tracking remote branches in repositories with complex remote setups.\nzenlog # No more tee-ing. 
zenlog captures both stdout and stderr to a log file while still displaying output in the terminal, without requiring pipes.\nFish Shell # git.fish # A collection of fish shell functions to make git less painful. Focused on reducing keystrokes for common git workflows.\nkubectl.fish # Fish shell functions for working with Kubernetes. Wraps common kubectl operations to reduce repetition when navigating clusters and namespaces.\nfishies.fish # A collection of useful fish scripts that cover automation tasks without a more specific home.\nObsidian Plugins # obsidian-slack-emoji # An Obsidian plugin that renders Slack-style emoji shortcodes (:tada:, :rocket:, etc.) inline in notes. Useful when moving content between Slack and Obsidian.\nobsidian-backup-git # An Obsidian plugin for automatic git-based backup without remote sync. Commits vault changes on a schedule, preserving local history without requiring a remote repository.\nSystem Configuration # dotfiles # Configuration for git, NeoVim, tmux, and the fish shell. Maintained as a living record of how my development environment is set up.\n","externalUrl":null,"permalink":"/projects/","section":"Shawn Sorichetti","summary":"A mix of open source contributions, infrastructure tooling, and personal utilities. Active projects only — archived work has been removed.\nOpen Source Contributions # MetaCPAN # MetaCPAN is the open source search engine for CPAN, the Comprehensive Perl Archive Network. It provides a web interface and API used by the Perl community worldwide.\n","title":"Projects","type":"page"},{"content":" Shawn Sorichetti # Whitby, Ontario, Canada\nshawn@sorichetti.org\nhttps://ssoriche.com/coverletter\nTechnical Skills # With more than twenty years of experience as a systems architect, administrator \u0026amp; developer, my responsibilities have included the design and management of geographically diverse installations of varying scale. 
I have proposed and seen to completion a wide variety of infrastructure projects including the monitoring of systems and networks, data backup and recovery, statistics gathering and reporting and automation. I am often called upon for my ability to provide immediate resolution during mission critical outages and then to perform the subsequent root cause analysis. Regardless of my role, team member, technical lead or project lead, I always have a strong sense of responsibility to deliver projects on time.\nProgramming: Go, Python, Perl, Shell DevOps: Kubernetes, Terraform, Docker, Jenkins, Ansible, Prometheus, GitLab, GitHub Database: PostgreSQL, MySQL/MariaDB/Galera, MongoDB, InfluxDB, Oracle, Teradata, Netezza, SQLServer, Operating Systems: Linux, FreeBSD, macOS, Solaris, HPUX, AIX. Monitoring: Prometheus/Grafana, Netsaint/Nagios/Zabbix, NetView, SNMP, Networking: TCP/IP, Apache/nginx, VPN, Firewall, DNS, SMTP (postfix, exim). Employment History # Manager II Engineering - Platform Runtime Services # ZipRecruiter - August 2023 to Present\nGrew and developed team of 4 engineers, successfully promoting 3 team members through focused mentorship, clear growth paths, and high-impact project opportunities Established thrice-weekly stand-ups adopted across the organization and bi-weekly cross-functional one-on-ones to gather requirements, predict needs, and align on organizational priorities Manage high-performing team of 4 delivering high-visibility platform initiatives that impact 200+ engineers across the organization Led team migration from Logz.io to Grafana Loki, reducing observability costs while improving log aggregation performance for development and production environments Directed implementation of internal developer platform using Backstage IDP and Crossplane for infrastructure provisioning, enabling self-service developer workflows and reducing environment setup time Oversaw team migration to ArgoCD for GitOps-based continuous deployment, improving 
deployment reliability and providing developers with better visibility into release status Senior Software Engineer II - Site Reliability Engineer # ZipRecruiter - July 2019 to August 2023\nMigrate from Cluster Autoscaler to Karpenter for Kubernetes cluster scaling lead developer in migration from docker/docker-shim to containerd in kubernetes clusters develop and deploy Kubernetes upgrades including upgrades and migrations of etcd, and etcd-manager, kOps, cert-manager and other cluster tooling. definition and implementation of dynamic developer environments running in Kubernetes Optimization of application image build process Creation of internal CPAN cache for application builds daily operations and maintenance of Kubernetes environments Senior Software Architect # Colored Blocks - October 2016 to July 2019\nOn Assignment: All Around the World\nParticipate in migration and building of GitLab CI/CD containerized workflow build automation and testing Application caching architecture and implementation using HA Redis clustering Review and recommend PostgreSQL architecture and performance improvements Create and deploy database change management design based on Sqitch Maintain, deploy and augment Ansible distribution of servers and applications, including a Vagrant and Docker based test cluster Development and deployment of CI strategy with Jenkins and Docker via Ansible Development of application deployment strategies Develop authentication mechanism involving Perl, Catalyst, LDAP, DBIx::Class and Galera. 
Create and deploy a timeseries data collection solution with InfluxDB. Develop application integration with Kafka. Update, migrate, and develop HA solution for PostgreSQL. Data extraction from Oracle with ELT logic ending in PostgreSQL. Database migration and upgrade of MongoDB from existing hardware to new. AngularJS 1.2 to 1.64 migration, including rewriting controllers to components. On Assignment: SiteSuite WebSite Design\nSolution and migrate PostgreSQL 9.2 servers to 10.3 in new data centre with minimal downtime. Create and deploy PostgreSQL connection pooling via pgpool to provide HA and reduce application connections to database servers. Design and deploy database backup and recovery system. Create Ansible deployment and maintenance strategy for data centre migration. Senior Software Engineer # Scorecard Systems Inc - October 2007 to September 2016\nTelecommunication Data Analytic Software.\nAutomate application system build/deployment using a combination of Vagrant, Subversion, and custom applications. Create dynamic data-model-driven version of the ETL application. Develop interface applications between database processes and web services. Automate column datatype changes throughout application, based on source tables. Maintain and enhance existing SQL, PL/SQL, T-SQL, and Teradata ETL application. Performance optimization, data testing, customer implementations. Tools and Automation Technical Team Lead/Lead Software Engineer # IBM Global Services - February 2000 to October 2007\ne-Business Hosting Center. Provides managed hosting and co-location services.\nResponsible for 7 developers and 3 tools support personnel located across the country. Automated product testing and release based on CPAN, Smolder, and Subversion. Automated UNIX/Windows user management and password changes. Proactive monitoring using custom web-based dashboard solution for servers, switches, storage utilization, security compliance, patch management. 
Backup, Recovery, Disaster Planning, and automated monthly billing. Build 5 data centres across Canada, including monitoring, backup and recovery, DNS, SMTP relays. Represent Tools Team and servers in quarterly departmental and yearly corporate security audits. UNIX System Administrator # IBM Global Services - October 1999 to February 2000\nManaged e-Business Services. Deployment and maintenance of UNIX servers on the Internet to host customers\u0026rsquo; applications.\nUNIX System Administration supporting both hardware and software. Automate UNIX system build process. Y2K system recovery process design and testing. Implement corporate security guidelines. Participate as Systems Administrator in quarterly departmental security audit. Solution Architect/Automation Software Engineer # IBM Global Services - November 1998 to October 1999\nProjects and Consulting Services. Provide solution design and consulting services to external customers and internal projects.\nBackup/Recovery architecture for IBM Canada and customers. Automate User ID synchronization tool for AIX System Administration team. Develop monitoring solution for SNMP-enabled printers and servers. Provide support to AIX System Administration team on security. Investigate Linux as a viable desktop and server environment. AIX Systems Administrator \u0026amp; Technical Team Lead/Software Engineer # IBM Global Services - January 1997 to November 1998\nRS/6000 Systems Management Services. Provide AIX System Administration, Tivoli, and ADSM support.\nCreate a web-based user registration and management platform for ADSM users. AIX System Administration hardware and software builds. Design and implementation of monitoring services for 20 data centres. Architect and implement country-wide backup and recovery services for workstations and servers. Responsible for a team of 4 support people and developers. Implement corporate security guidelines and participate in corporate security audit. 
Education History # December 2001:\nRed Hat Linux Certified Engineer (Red Hat) May 1996 to 2007:\nIBM Education \u0026amp; Training:\nTCP/IP Architecture AIX System Administration Project Management Fundamentals Advanced Presentation Skills TSM/ADSM Advanced Concepts Tivoli TME10 Administration NetView Administration April 1996:\nCentennial College - Computer Programming Diploma (2 Year Program) ","externalUrl":null,"permalink":"/resume-full/","section":"Shawn Sorichetti","summary":"Shawn Sorichetti # Whitby, Ontario, Canada\nshawn@sorichetti.org\nhttps://ssoriche.com/coverletter\nTechnical Skills # With more than twenty years of experience as a systems architect, administrator \u0026 developer, my responsibilities have included the design and management of geographically diverse installations of varying scale. I have proposed and seen to completion a wide variety of infrastructure projects, including the monitoring of systems and networks, data backup and recovery, statistics gathering and reporting, and automation. I am often called upon for my ability to provide immediate resolution during mission-critical outages and then to perform the subsequent root cause analysis. Regardless of my role, whether team member, technical lead, or project lead, I always have a strong sense of responsibility to deliver projects on time.\n","title":"Resume","type":"page"},{"content":" Senior Software Engineer/Manager - Toronto, CAD # With more than twenty years of experience as a systems architect, administrator \u0026amp; developer, my responsibilities have included the design and management of geographically diverse installations of varying scale. I have proposed and seen to completion a wide variety of infrastructure projects, including the monitoring of systems and networks, data backup and recovery, statistics gathering and reporting, and automation. 
I am often called upon for my ability to provide immediate resolution during mission-critical outages and then to perform the subsequent root cause analysis. Regardless of my role, whether team member, technical lead, or project lead, I always have a strong sense of responsibility to deliver projects on time.\nProgramming: Go, Python, Perl, Shell. DevOps: Kubernetes, Terraform, Docker, Jenkins, Ansible, Prometheus, GitLab, GitHub. Database: PostgreSQL, MySQL/MariaDB/Galera, MongoDB, InfluxDB, Oracle, Teradata, Netezza, SQLServer. Operating Systems: Linux, FreeBSD, macOS, Solaris, HPUX, AIX. Monitoring: Prometheus/Grafana, Netsaint/Nagios/Zabbix, NetView, SNMP. Networking: TCP/IP, Apache/nginx, VPN, Firewall, DNS, SMTP (postfix, exim). Experience # Manager II Engineering - Platform Runtime Services # ZipRecruiter - August 2023 to Present\nGrew and developed team of 4 engineers, successfully promoting 3 team members through focused mentorship, clear growth paths, and high-impact project opportunities. Established thrice-weekly stand-ups adopted across the organization and bi-weekly cross-functional one-on-ones to gather requirements, predict needs, and align on organizational priorities. Manage high-performing team of 4 delivering high-visibility platform initiatives that impact 200+ engineers across the organization. Led team migration from Logz.io to Grafana Loki, reducing observability costs while improving log aggregation performance for development and production environments. Directed implementation of internal developer platform using Backstage IDP and Crossplane for infrastructure provisioning, enabling self-service developer workflows and reducing environment setup time. Oversaw team migration to ArgoCD for GitOps-based continuous deployment, improving deployment reliability and providing developers with better visibility into release status. Senior Software Engineer II - Site Reliability Engineer # ZipRecruiter - July 2019 to August 2023\nMigrate from Cluster Autoscaler to 
Karpenter for Kubernetes cluster scaling. Lead developer in migration from Docker/dockershim to containerd in Kubernetes clusters. Develop and deploy Kubernetes upgrades, including upgrades and migrations of etcd, etcd-manager, kOps, cert-manager, and other cluster tooling. Definition and implementation of dynamic developer environments running in Kubernetes. Optimization of application image build process. Creation of internal CPAN cache for application builds. Daily operations and maintenance of Kubernetes environments. Senior Software Architect # Colored Blocks - October 2016 to July 2019\nOn Assignment: All Around the World\nParticipate in migration and building of GitLab CI/CD containerized workflow build automation and testing. Application caching architecture and implementation using HA Redis clustering. Review and recommend PostgreSQL architecture and performance improvements. Create and deploy database change management design based on Sqitch. Maintain, deploy, and augment Ansible distribution of servers and applications, including a Vagrant- and Docker-based test cluster. Development and deployment of CI strategy with Jenkins and Docker via Ansible. Development of application deployment strategies. Develop authentication mechanism involving Perl, Catalyst, LDAP, DBIx::Class, and Galera. 
Create and deploy a timeseries data collection solution with InfluxDB. Develop application integration with Kafka. Update, migrate, and develop HA solution for PostgreSQL. Data extraction from Oracle with ELT logic ending in PostgreSQL. Database migration and upgrade of MongoDB from existing hardware to new. AngularJS 1.2 to 1.64 migration, including rewriting controllers to components. On Assignment: SiteSuite WebSite Design\nSolution and migrate PostgreSQL 9.2 servers to 10.3 in new data centre with minimal downtime. Create and deploy PostgreSQL connection pooling via pgpool to provide HA and reduce application connections to database servers. Design and deploy database backup and recovery system. Create Ansible deployment and maintenance strategy for data centre migration. My work history before 2016 is available upon request. # ","externalUrl":null,"permalink":"/resume/","section":"Shawn Sorichetti","summary":"Senior Software Engineer/Manager - Toronto, CAD # With more than twenty years of experience as a systems architect, administrator \u0026 developer, my responsibilities have included the design and management of geographically diverse installations of varying scale. I have proposed and seen to completion a wide variety of infrastructure projects, including the monitoring of systems and networks, data backup and recovery, statistics gathering and reporting, and automation. I am often called upon for my ability to provide immediate resolution during mission-critical outages and then to perform the subsequent root cause analysis. 
Regardless of my role, whether team member, technical lead, or project lead, I always have a strong sense of responsibility to deliver projects on time.\n","title":"Shawn Sorichetti","type":"page"},{"content":"A collection of slash pages — short, single-purpose pages about me.\n/now — What I\u0026rsquo;m currently up to /starbucks — My coffee order /resume — Work history and skills /projects — Things I\u0026rsquo;ve built or contributed to /talks — Conference and meetup presentations ","externalUrl":null,"permalink":"/slash/","section":"Shawn Sorichetti","summary":"A collection of slash pages — short, single-purpose pages about me.\n/now — What I’m currently up to /starbucks — My coffee order /resume — Work history and skills /projects — Things I’ve built or contributed to /talks — Conference and meetup presentations ","title":"Slash Pages","type":"page"},{"content":"Brown Sugar Oat Americano\n","externalUrl":null,"permalink":"/starbucks/","section":"Shawn Sorichetti","summary":"Brown Sugar Oat Americano\n","title":"Starbucks Order","type":"page"},{"content":" TPCiP 2019 Wrap Up # Recently returned from The Perl Conference in Pittsburgh. This talk is a summary of the talks attended.\nPresented at:\nToronto Perl Mongers June 27, 2019 slides video\nFixup and Autosquash # Everyone makes mistakes while writing code. Correcting the mistakes should not impact the story of the code. 
This presentation is a brief overview of the git commit --fixup and git rebase -i --autosquash commands.\nPresented at:\nDC Baltimore Perl Workshop April 6, 2019 slides\nDocker Compose Explained # A quick walk-through of the advantages of using docker-compose over standalone docker commands, and how to convert a docker run command into a docker-compose.yaml file.\nThe presentation garnered a lot of discussion with the attendees, which is available through the video.\nPresented at:\nToronto Perl Mongers February 28, 2019 slides video\nMetaCPAN, Mojolicious, and OpenAPI # Taking an existing, established API and converting it to the OpenAPI standard. This talk is an introduction to OpenAPI and the methodologies and reasoning used when adding it to MetaCPAN.\nPresented at:\nToronto Perl Mongers March 28, 2019 slides video\nDivorcing System # This talk discusses the use of Carton and plenv to install the versions of perl and modules an application expects, without requiring root access or replacing the system-installed version.\nPresented at:\nToronto Perl Mongers June 28, 2018 slides video\nTesting with PostgreSQL # Using the Test::PostgreSQL, App::Sqitch, DBIx::Class, and DBIx::Class::EasyFixture Perl modules to build a testing environment that is dynamically created and destroyed with each execution. Tests can be run either serially or in parallel without impacting each other.\nIncludes code examples to introduce creating the environment.\nNOTE This is a code-heavy talk, and there are \u0026ldquo;eye charts\u0026rdquo;; reviewing the included source code is recommended.\nPresented at:\nToronto Perl Mongers October 26, 2017 slides source code sorry, no video.\n","externalUrl":null,"permalink":"/talks/","section":"Shawn Sorichetti","summary":"TPCiP 2019 Wrap Up # Recently returned from The Perl Conference in Pittsburgh. 
This talk is a summary of the talks attended.\nPresented at:\nToronto Perl Mongers June 27, 2019 slides video\nFixup and Autosquash # Everyone makes mistakes while writing code. Correcting the mistakes should not impact the story of the code. This presentation is a brief overview of the git commit --fixup and git rebase -i --autosquash commands.\n","title":"Talks","type":"page"},{"content":"","externalUrl":null,"permalink":"/til/","section":"TIL","summary":"","title":"TIL","type":"til"}]