The Tor Project is censoring discussion about killing off Tor v2

on blog at

Tor is pretty great. It provides a free pseudonymous proxy to the clear web, and more importantly, it provides a secure name-to-address service in the form of .onion domains. For the last 15 years Tor's version 2 onion services have been heavily used. But they're getting a bit long in the tooth: the truncated hash of the public key that controls a v2 .onion domain is starting to become feasible for a powerful or rich actor to brute force, and there are security issues with the kinds of DoS that v2 relaying allows.

So naturally the Tor Project introduced a new, incompatible version of onion services, version 3. This one won't have brute-forcing issues for a long time since its addresses encode far more of the key material with stronger cryptography. Tor v3 was announced in 2017 and was generally usable by 2018 or so; in 2019 they announced that v2 would be deprecated in 2021.
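For a sense of scale: a v2 address is just the first 80 bits of a SHA-1 hash of the service's public key, base32-encoded. A minimal sketch of that derivation (with a dummy byte blob standing in for the real DER-encoded RSA-1024 public key):

```python
import base64
import hashlib

def v2_onion(der_pubkey: bytes) -> str:
    # v2: first 80 bits (10 bytes) of the SHA-1 of the DER-encoded
    # RSA-1024 public key, base32-encoded -> 16 characters
    digest = hashlib.sha1(der_pubkey).digest()
    return base64.b32encode(digest[:10]).decode().lower()

# Dummy stand-in for a DER-encoded RSA public key (illustration only)
addr = v2_onion(b"not a real RSA key, just a demo blob")
print(addr, len(addr))  # 16 base32 chars -> only 80 bits to brute-force
```

80 bits is what a rich actor can start chewing on; v3 addresses sidestep the problem by encoding the full 32-byte ed25519 public key, giving the familiar 56-character names.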

Deprecation... that usually means it's suggested not to use it. But that's not what the Tor Project meant. What they meant is that in October 2021 they would completely remove all code and support for version 2 .onion resolution from all official Tor software. The clients, the relays, everything. Version 2 was to be completely killed off. For the Tor Project, who value security above all else, there was no other option. And so 15 years of .onion community interlinking, bookmarks, search indices, and communities just disappeared. Sure, some sites created a new v3 .onion domain and encouraged their users to switch. But the vast majority of .onions did not create v3 replacements. In fact, they mostly still exist and are still accessible, because the relays out there still support v2. And they will until the Tor Project releases a new version with a consensus flag to block old versions that support v2.

So, version 2 relays are still used. The majority of sites are still v2 (despite someone spamming v3 onions right after their creation, making six times the number of v2 onions, to make it seem like v3 was getting use, v2 traffic is still higher). So now people are updating their Tor client software, trying to visit a Tor onion website, and instead getting error 0xF6, a generic error saying it's not a valid onion domain.

These users come to #tor on OFTC and ask why they cannot access the site. And they won't get an answer beyond "<ggus> S17: probably you're trying to visit a v2 onion site (16 chars). the v2 support was fully removed in Tor Browser 11.". ggus has further declared to me personally that any talk about Tor v2 beyond linking to the deprecation blog post will result in a ban. That's right: the Tor Project IRC channel is censoring discussion directly relevant to Tor. Tor, censoring. It would be laughable if the consequences weren't so dire.

They claim that no one uses v2, and that's a lie. They actively hide the reasons why users cannot access real Tor sites. They're attacking their own userbase, all in the name of security. Tor v2 doesn't need to die. It isn't even dead now; it's still very active. There need not be a single answer to "Is Tor v2 still okay to use?". That's a personal question, and top-down forcing followed by censorship is definitely the wrong way to address the issue.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


Blender sucks for 3D printing and can't do boolean ops

on blog at

I finally got my 3D printer working properly, which meant it was finally time to learn some "real" software for 3D design instead of fiddling with SketchUp trying to coax surface-based modeling into producing manifold, printable items. I need to do boolean operations on 3D meshes on a Linux desktop, so I figured I'd try Blender.

Learning Blender was slow going. It took me about a week before I felt confident I could make the shapes I wanted. So I finally got down to doing boolean operations on those shapes... and Blender can't do it. I couldn't believe it, but even boolean differences with simple unedited meshes of a sphere and a cube fail in almost all non-trivial cases. Differences will add, differences will create non-manifold results, and sometimes differences won't work at all and just produce a hash of jagged edges and overlapping surfaces tied to important wall surfaces. And that's with simple geometric primitive meshes. Trying to boolean-subtract a complex shape like a fishing lure jig from a simple cube will definitely always fail.


I spent a couple days on this trying boolean ops with simple geometries following youtube tutorials before I gave up and asked for help on #blender on irc.libera.chat. It turns out Blender is just shit at boolean operations and the problem wasn't me. So... I just wasted a week learning a program that can't do what I want. And when I threw my hands in the air and decided to just bend over and try Autodesk Fusion 360 instead I couldn't even get Firefox 88 to be able to download the installer because Autodesk uses such bleeding edge javascript/webcomponents un-HTML even a 6 month old browser can't run it.

I guess for now I'll go back to faking things with SketchUp. I'm just shocked that software with such a good reputation is so fundamentally broken.

---------------

Disregard that, I was just using a 1 year old version.

It turns out that the Blender 2.8 LTS that ships with Debian 11 *is* terrible and can't do boolean operations. But the Blender 2.9 binaries *can*, and they do it perfectly for simple geometric primitives. It still can't handle my complex meshes, but that's probably more down to my mesh errors than anything in Blender.
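The most common mesh error feeding a boolean failure, as far as I can tell, is a mesh that isn't watertight. A necessary condition for a closed, printable mesh is that every edge be shared by exactly two faces; here's a quick hand-rolled sketch of that check (illustrative only, not Blender's API):

```python
from collections import Counter

def is_watertight(faces):
    """Check that every edge is shared by exactly two faces --
    a necessary condition for a closed, manifold, printable mesh."""
    edges = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges[(min(a, b), max(a, b))] += 1
    return all(count == 2 for count in edges.values())

# A closed tetrahedron: 4 vertices indexed 0-3, 4 triangular faces
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight(tetra))       # True: closed solid
print(is_watertight(tetra[:-1]))  # False: removing a face leaves boundary edges
```

A mesh that fails this check gives a boolean solver no well-defined "inside" to subtract from, which matches the kind of garbage output I was seeing.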

---------------

Much learning later...

There are a lot of very finicky, unpredictable tricks to getting boolean operations to work cleanly. But the subtractions and additions are mostly possible, even if each one takes 5 tries with lots of hacky workarounds. So I made and printed a type of "snap" and gliding jig that imitates dead minnows ramming into the bottom. These are super expensive right now, $12-15 if available at all, and it pains me to lose them in snags. Losing 2 cheap 3/8th jig heads and some terminal tackle is much less of a problem.






[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


If I set sars-cov-2 medical policy: intranasal vaccination booster and mandated real N95 masks

on blog at

If I were suddenly and absurdly given control of pandemic medical policy at any scale I would implement two things which I believe would put *ending* the pandemic back on the table.

The first would be a "warp speed"-like program to fund both existing and new intranasal sars-cov-2 vaccines that actually prevent transmission of the virus and mucosal infection. The only feasible way to end the pandemic is to stop the spread of the disease. Intramuscular vaccination alone, preventing hospitalization and hospital overload, isn't enough. It is just the first step. An intranasal booster after intramuscular vaccination could stop the spread.

There are only 7 intranasal sars-cov-2 vaccinations undergoing early phase 1/2 trials right now and of them only 2 are using a sane design: one live attenuated sars-cov-2, one protein fragment of the receptor binding domain. I hope one of the two manages to clear phase 3 and be manufactured. Otherwise the only option may be ordering peptides and assembling the community designed RaDVaC sars-cov-2 intranasal vaccine at a price point of $5k for a couple dozen doses.

"the ideal vaccination strategy may use an intramuscular vaccine to elicit a long-lived systemic IgG response and a broad repertoire of central memory B and T cells, followed by an intranasal booster that recruits memory B and T cells to the nasal passages and further guides their differentiation toward mucosal protection, including IgA secretion and tissue-resident memory cells in the respiratory tract." - https://science.sciencemag.org/content/373/6553/397

The second, and much less important, would be both short and long term funding of N95 mask factory production, plus mandating N95 or better masks in all public indoor spaces. Procedure masks and unfitted cloth masks do not protect the wearer, or those around them, against aerosol-spread respiratory viruses. They protect against spittle. That's all. Current mask laws basically only require face coverings indoors, and that does nothing when there are giant centimeter^2 gaps through which air and aerosols flow. Their effective rating is more like N30 to N40; most aerosols just go around the mask. Critical to this would be a public messaging campaign nuanced enough to acknowledge that yes, prior "masks" and "mask" laws actually didn't work, just like the idiot anti-maskers said. But that's because most masks aren't actually masks against aerosols, not because aerosol masks don't work.
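A back-of-envelope leakage model shows why fit matters more than the filter media: if some fraction of inhaled air simply bypasses the filter through gaps, the effective rating collapses. The numbers below are illustrative, not measured:

```python
def effective_filtration(filter_eff, leak_fraction):
    """Fraction of aerosol stopped when leak_fraction of the air
    bypasses the filter entirely (back-of-envelope model)."""
    penetration = leak_fraction + (1.0 - leak_fraction) * (1.0 - filter_eff)
    return 1.0 - penetration

# N95 media (95% efficient) but 60% of air leaking around the edges:
print(round(effective_filtration(0.95, 0.60), 2))  # 0.38 -> roughly "N38"
# Same media, fit-tested down to ~2% leakage:
print(round(effective_filtration(0.95, 0.02), 2))  # 0.93
```

That's how a loose face covering made of perfectly good material ends up performing in the N30-N40 range.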

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


CHIME Radio Telescope publishes new list of 500+ fast radio bursts

on blog at

CHIME has been my favorite radio telescope since before they even started looking for fast radio bursts. I really like the idea of using the line focus of parabolic cylinders. It's obviously the most economical way to get a huge collection area/aperture. I've heard that reflections down the long axis are a problem for some CHIME uses, but apparently not for detecting fast radio bursts. They just published The First CHIME/FRB Fast Radio Burst Catalog on arxiv, and a summary article on Nature.com made the Hacker News frontpage. It looks like they really will end up detecting an FRB per day.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


Antenna gain is confusing: size and directivity change each other

on blog at

I've been trying to smooth out my understanding of antenna gain for years. To me it seems like aperture determines how much energy you receive regardless of multiplicative weightings provided by the directivity*efficiency gain. You can only be directive and efficient with current distributions you actually intersect with. But the more I try to tease directivity and aperture apart the more I learn they're completely coupled. You can't have changed aperture without changed directivity. But can you have changed directivity without changed aperture?

The sky pattern of a radiator is determined by the Fourier transform (video) of the current distribution (video). The longer the current distribution (relative to wavelength), the tighter the pattern on the sky (higher directivity). So every time you increase the size of your antenna aperture, say by making a longer horn, a bigger dish, or adding another element to a collinear array, the lengthened current path makes the summed pattern on the sky smaller in angle. It seems you can't change aperture without changing directivity.
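That size-to-beamwidth coupling can be sketched numerically: sum the contributions of a line of equally fed isotropic elements and watch the half-power beamwidth shrink as the array (the current distribution) gets longer. A toy model, half-wavelength spacing assumed:

```python
import math

def array_factor(n_elements, spacing_wl, theta):
    """Normalized pattern of n equally fed isotropic elements spaced
    spacing_wl wavelengths apart; theta measured from broadside."""
    psi = 2 * math.pi * spacing_wl * math.sin(theta)
    re = sum(math.cos(k * psi) for k in range(n_elements))
    im = sum(math.sin(k * psi) for k in range(n_elements))
    return math.hypot(re, im) / n_elements

def half_power_beamwidth(n_elements, spacing_wl=0.5):
    """Scan out from broadside until the pattern drops to 1/sqrt(2)."""
    for step in range(1, 9000):
        theta = math.radians(step * 0.01)
        if array_factor(n_elements, spacing_wl, theta) < 1 / math.sqrt(2):
            return 2 * math.degrees(theta)  # full width, both sides
    return 180.0

print(half_power_beamwidth(4))  # ~26 degrees
print(half_power_beamwidth(8))  # ~13 degrees: double the aperture, half the beam
```

Doubling the element count (the aperture length) halves the beamwidth, exactly the Fourier relationship: a longer current distribution transforms to a narrower sky pattern.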

But can you change directivity without changing aperture? I thought about adding discrete elements like inductors or caps in series with the antenna element to force excitation of different modes (say 1/2 or 3/4 wave instead of 1/4 wave for a monopole). That would drastically change the pattern... but is there any feasible arrangement that doesn't change the physical length of the element and so its effective aperture? Or, even if the physical aperture remained the same, wouldn't the forced current distribution change the *effective* aperture? Even if a mode-forcing, pattern-changing experiment could feasibly be built, it would have severely reduced efficiency gain, entangling yet another variable.

And then there's the scenario where you just have many antennas, each omnidirectional and sampled by discrete receivers but later aligned algorithmically and added in the digital domain. The antenna response pattern would still be omnidirectional in that case so the directivity would not change but the aperture would change significantly (doubling in area).
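A toy model of that digital summing, assuming perfect alignment: the signal adds as amplitude (power grows as N^2) while independent receiver noise adds as power (grows as N), so SNR improves by a factor of N even though each element's own pattern stays omnidirectional:

```python
import math
import random

random.seed(42)

def snr_gain(n_antennas, n_samples=20000):
    """Coherently sum an identical signal plus independent unit noise
    from n antennas; return the power SNR relative to one antenna."""
    sig_power = 0.0
    noise_power = 0.0
    for _ in range(n_samples):
        s = 1.0                                    # aligned signal sample
        noise = sum(math.fsum([random.gauss(0, 1)]) for _ in range(n_antennas))
        sig_power += (n_antennas * s) ** 2         # amplitudes add: N * s
        noise_power += noise ** 2                  # powers add: ~N
    return sig_power / noise_power

print(round(snr_gain(1)))  # ~1
print(round(snr_gain(4)))  # ~4: SNR scales with the number of antennas
```

So the "gain" shows up as realized SNR toward the aligned direction even though no single element's response pattern changed, which is exactly what makes the directivity/aperture bookkeeping confusing.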

Antenna gain is confusing. It seems that bigger antennas always mean smaller patterns and smaller patterns always mean bigger antennas. But I'm still not sure it *always* applies like some physical law.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


Homegrown baby grape leaf and vine tendril salad

on blog at

The wild grape (river grape) plants I grow for privacy, and to graft table grapes onto, were getting a bit overgrown and too close to the outdoor lights this spring. I decided to make a nice salad with the tender baby grape leaves and sour vine tendrils from the vines I cut down. It turned out absolutely delicious with some bacon, cucumber, cheese, croutons, and balsamic dressing.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


Spawning Bluegill Nest Grid

on blog at

I went fishing recently and came upon this patch of bluegill (panfish) nest beds all packed in together along a stretch of shore about 20 times the width of this photo (each bed is about 1 ft across). We noticed it initially because a largemouth bass we were targeting kept returning to the area despite being spooked.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


What is a static website?

on blog at

I thought I knew: a static website is a site where the webserver serves literal .html files written in HTML, and the page does not need any execution of a programming language to display perfectly. I know CSS is Turing complete these days, but so far that's mostly of trivial consequence. But apparently modern web dev vernacular has shifted: "static" no longer describes the user-facing experience and is instead qualified by the dev-side experience. If the contents of the HTML files are ever changed by execution of a programming language, like, say, a shell script that parses a logfile, then it is no longer a static site, even if the webserver only serves static .html and media files.

By that definition this wouldn't be a static website. I'm obviously biased, but I don't think the modern web dev definition is very useful, except for avoiding confusion when talking with the devs who use it.
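The workflow the modern definition excludes is roughly this kind of thing: a script rewrites an .html file offline while the webserver only ever serves flat files. A toy sketch, with hypothetical filenames:

```shell
#!/bin/sh
# "Dynamic build, static serve": parse a logfile, regenerate a page.
# access.log and hits.html are stand-in names for illustration.
printf 'GET /a\nGET /b\nGET /c\n' > access.log   # stand-in logfile
hits=$(grep -c 'GET' access.log)
cat > hits.html <<EOF
<html><body><p>This page has been requested $hits times.</p></body></html>
EOF
cat hits.html
```

Run it from cron and the result is, to every visitor, indistinguishable from a hand-written page; only the build step makes it "dynamic" under the new vernacular.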

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?

Comments:
7:51:35, Fri Sep 16, 2022 : /blog/2021-05-27-1.html/, Do you premoderate comments
<superkuh> 11:47:08, Wed Sep 16, 2022 : /blog/2021-05-27-1.html/, As you found out, nope. But comments are not automatically added to the blog pages that are commented on. They are only automatically added here in the general comment page. So in a sense in-line comments displayed on the blog posts themselves are pre-moderated.

The Freenode split: I side with Libera

on blog at

The last week has been rough. I've been on Freenode since 2001, 20 years, and I've had a registered nick since 2004. I consider it my home. Perhaps even more than I consider my current apartment my home. The people, the culture, the *help*, and the community were amazing. It was a refuge from the constant turmoil of life elsewhere, in physicality and, eventually, over the rest of the internet. So it was really upsetting when I first heard that Freenode was splitting on the 16th. I spent almost *all* of my free time from then to now talking to Andrew Lee and various staff 1-to-1 getting their sides, reading the press releases each put out, and watching what happened on Freenode and Libera. It was, and is, painful. I kept hoping for reconciliation but at this point I don't think it's possible anymore.

As far as I can tell it started when the holding company Freenode LLC was set up. IRC nerds aren't the best at legal bullshit, so help from rasengan was brought in and he ended up on the board. Later christel wanted out for undetermined reasons and sold the Freenode LLC holding company to rasengan. At the time there was a lot of anxiety on freenode (I was there) over the new corporate ownership; rasengan/Andrew Lee has a lot of other for-profit businesses. But we, and staff, were assured that he was just doing this because he loved IRC (which I still believe) and that he'd stay out of server operations.

At this point rasengan owned the holding company that owned the domain name. Everything else, the servers, the DNS control accounts, etc were owned and operated by staff. That includes setting up the relationships for third parties to donate servers to freenode. With christel's departure the staff got together and decided to vote tomaw as the new freenode leader to handle server operations. Things were okay for a while.

Then there was a hiccup with rasengan putting a little ad/link for one of his for-profit companies on the freenode website. That acted as the catalyst for tension, and tomaw asked for full control of the domain name. Things became more tense when, after freenode staff had spent some time contributing to the development of an updated IRCd, they made a blog post about switching to it. This was a problem for rasengan, since his overarching goal with IRC.com and ownership of IRC networks (like snoonet, running on IRC.com resources) is to set up a truly distributed IRC where any server can peer to any other and easily switch networks. I've heard he'd put a significant amount of money into developing this IRC.com IRCd. And that provided his motivation for opposing the staff's switch to their modified IRCd for future operations.

This was now rasengan interfering directly in the operations of the freenode network. And the heated debates this caused eventually led to litigation by rasengan against tomaw. At this point it was obvious that christel's and rasengan's statements about the sale were just words, and that legal means were now going to be used to take control of the operation of the servers.

The staff drafted their various resignation letters; some were leaked early on the 16th. At this point I got involved as a regular user on #freenode and talked to rasengan there and on the Hacker News forums. I also talked to the staff. Even then I personally hoped for reconciliation. But apparently it wasn't possible. Legally, Freenode LLC (if not actual freenode, the people and servers) was owned by rasengan. So the staff decided to resign en masse.

Based on rasengan's behavior, and the type of new staff he appointed, I think that libera represents the ideals and people that make up Freenode far more than Freenode itself does anymore. I'm trying to move but it's going to take a long time to let everyone know what's going on. Most, reasonably, don't care about network drama.

Anyway, I look forward to seeing you all on irc.libera.chat going forwards.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


Why don't we need containers to run bash scripts?

on blog at

Most languages that rapidly add new features get those features used, and that creates the need for containerization on any machine with a release older than $shorttimeperiod. The problem is not old code going forward but code from today going back to whatever your distro had at release. With fast-moving languages like Rust, Python, and these days even C++ with its extensions, you'll eventually find your system repos don't have the required version to turn arbitrary new source code into a running program. It could be a pyenv or a container (Flatpak, Snap, AppImage, Docker, etc.) but something has to bring in *and contain* the unsatisfied deps and provide them.

But I've never had to do that for a Bash or any shell script. They just run, even on decades-old installs. I assumed this was because bash wasn't getting new features and was down to just bug fixes and maintenance at this point. But I was wrong. Bash gets new features all the time. Features that older bash versions cannot run.

<phogg> rangergord: bash changes *all the time*. It just doesn't break backwards compatibility very much.
<superkuh> It does?
<phogg> superkuh: was that intended for me?
<superkuh> Yeah, I'm genuinely surprised. I thought it was just bug fixes and such these days.
<phogg> superkuh: New features with every release, too.
<hexnewbie> superkuh: People simply don't follow the Twitter feed with new features to use them, so you're far less often surprised. Come to think of it, the amount of cmd1 `cmd2` I see suggests even less up-to-date coders :P
<superkuh> Well, that destroys my argument.
<phogg> superkuh: the POSIX shell command language changes, too, although much more slowly--and absolutely without breaking things.
<hexnewbie> In Python everyone's *dying* to have that feature *yesterday* (that includes me)
<phogg> superkuh: see https://lists.gnu.org/archive/html/bug-bash/2019-01/msg00063.html and https://lists.gnu.org/archive/html/info-gnu/2020-12/msg00003.html for the two latest bash releases.

So why don't we have bash script compatibility problems? I don't know. None of my guesses are based on very much information. I will again just assume that most people, most devs, that work in bash don't care about the latest and greatest. They care about having their script run on as many machines as possible as smoothly as possible. I've been thinking about language/lib future shock in the wrong way. It's not the rapidity of the language that causes it. It's the culture of the devs using it.
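For one concrete example of the feature skew phogg describes, compare a bash 4.0+ case-conversion expansion with the portable spelling that runs on decades-old installs (the sketch shells out to bash for the new syntax so the script itself stays POSIX):

```shell
#!/bin/sh
# ${var,,} lowercasing was added in bash 4.0; older bashes and plain
# POSIX sh reject it as a syntax error. The tr pipeline runs anywhere.
modern=$(bash -c 'name=SuperKuh; echo "${name,,}"')            # bash 4.0+ only
portable=$(printf '%s' SuperKuh | tr '[:upper:]' '[:lower:]')  # runs anywhere
echo "$modern $portable"
```

Scripts written in the second style are why "bash scripts just work" everywhere: the authors target the lowest common denominator rather than last release's features.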

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?

Comments:
1:14:21, Tue Aug 17, 2021 : /blog/2021-05-07-1.html/, for more than a decade, bash is the shell i relied on. not only do i have it installed, its one of the few things ive compiled from source. i never was much into more limited shells, i never was much into newer fancier ones, i just used bash like a new standard. now that ksh comes with openbsd, i find it is more predictable and less tedious. the reason? fewer features. fewer gotchas in what a string does. some of those substitutions are useful, dont get me wrong, but the stability of ksh is what makes me appreciate it. i am tired of massaging code to work in bash, it has too many rules. and thats after years of tryig to rely on it. simplicity wins here. obviously dos was even simpler, but it did next to nothing and that wasnt enough.

Tor is killing off all v2 domains on October 15th, 2021

on blog at

For the last 15 years or so I've made sure to put all my websites on both the clear web and tor. I liked tor because I believed that I owned my domain name on tor. This is unlike the clear web with DNS and registrars merely leasing you a domain name. But it turns out that even on tor you don't own your domain name.

https://blog.torproject.org/v2-deprecation-timeline

Today I learned that the Tor project is killing off *all* tor v2 domains in October 2021, a handful of months from now. All of the tor web, the links between sites, the search engine indices, the rankings and reputations for onion domains, they will all disappear in a puff of smoke. I never really owned my tor domain. I owned my keys but The Tor Project owns the domains. And the Tor Project has decided to take my domain away from me.

Yes, I understand why tor v2 is deprecated. The hash of the keys is short enough that brute forcing a prefix to imitate some v2 address is nearly possible. But v2 has worked alongside v3 just fine for a couple years now. The idea that they have to completely remove it is false.

And the consequences of doing so are dramatic. The very heart of the tor web will be destroyed. All the links, the search indices and rankings, the domain reputations and bookmarks will all disappear. Some of the domains may create a new website using tor v3, but it will have no link back to the v2 version. The web of tor sites will simply disappear. Decades of community building gone in an instant.

I thought tor was useful for owning my domain but I was wrong. I no longer see any reason to run tor onion services, and I will not be creating the v3 service I'd been planning. I guess now's the time to try i2p.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?

Comments:
1:13:08, Sat May 1, 2021 : /blog/blog.html/, It's not just the key length, there are lots of other vulnerabilities that make v2 onions fundamentally unsound and insecure. It isn't just about you, the host it's also about protecting the identities of the people who installed Tor to browse the internet safely and anonymously. Personally, I don't think that search rankings and reputations are much of a concern since onions are almost exclusively discovered by word-of-mouth, onion-location headers, and webrings. The deprecation period has been very long, any actively updated onion site should have added a v3 link a long time ago. Lastly, of course it's not your domain you're joining a network of volunteer-run servers with a consensus - you can feel free to run your own network if you want complete control of the domain, but it's going to be awfully lonely. Anyway, I hope you change your mind and decide to keep a v3 site, but I'm interested in hearing more about i2p as well
10:47:29, Sun Nov 27, 2022 : /blog/2021-04-30-1.html/, lol

Guerilla gardening on a new dry lake bed

on blog at

I decided to take the opportunity to get in a little guerilla gardening on the now dry empty lake bed. The city plans to start seeding of fast growing grasses to reduce erosion later this year. But there'll be a brief 1-2 year window before any actual landscaping with native plants is done. I figure I can get a single season of growth in without too much disturbance. And if any of the trees I try seeding (oak, maple, birch, thornless honey locust, walnut) get going maybe the city will keep them.

This is what the lake used to look like.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


My Cree LED bulb popped the phosphor off one of its LEDs and emits UV.

on blog at

Sometime between last night and this morning my Cree 4-Flow style LED bulb had the phosphor coating pop off one of its LED emitters. It is now shining a bright violet light along with the warm white of the others. I don't think there's any danger from this, but it isn't very nice aesthetically.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


Rust isn't stable and shouldn't be used for things that need stability.

on blog at

Everything is being written or re-written in Rust these days. It has genuinely nice features and it's hip. Unfortunately it is also the least stable compiler/language in existence. Rust's "stable" versions don't even last a year. Something written for Rust 1.50 can't be compiled and run even on Debian 11, a distro *not even released yet*, which ships Rust 1.48 from 2020-11-19. Rust versions don't even last 5 months.

For all its safety and beauty as a compiler, it fails spectacularly at being able to compile. This is why every single Rust tutorial you see demands you not install rustc from your system repos (no matter how new) and instead use the shady rustup binary that pulls down whatever it wants from their online sources. The idea of a compiler that you cannot get from your repos is absurd, and that needs to be recognized.
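There is at least a partial mitigation: newer Cargo versions (1.56 and later, if I understand correctly) let a crate declare its minimum supported Rust version, so an old toolchain fails fast with a clear message instead of cryptic compile errors. A sketch, with an illustrative crate name:

```toml
# Cargo.toml: declare the minimum supported Rust version (MSRV).
# Cargo 1.56+ reads this and refuses to build on older toolchains
# up front; "example-crate" below is a hypothetical name.
[package]
name = "example-crate"
version = "0.1.0"
edition = "2018"
rust-version = "1.48"   # oldest rustc this crate promises to build on
```

Of course this only helps if crate authors actually target old versions instead of whatever rustup pulled down last week.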

Rust is cool, yeah, but it isn't a real language yet. It's just a plaything for coders right now. It is not a tool for making applications that can be run by other people. Eventually as a different demographic of devs begins using Rust they'll stop using the fancy newest backwards incompatible extensions and only target "stable" versions. And at that point Rust will slowly become a real language. But that day is not today.

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


2010 era CPU idle at similar or lower wattage than 2020 era CPU

on blog at

Some claim old computers should be replaced with modern ones because modern processors have far lower idle wattages. But a mid-range Intel Core 2 Duo 6400 from the 2008 era idles at 22 W. Compare that to a modern processor like my mid-range Ryzen 3600 at 21 W.

A better argument for replacing older PCs is the last decade of falling prices for 80 Plus Gold and higher efficiency power supplies that can actually deliver that efficiency at idle loads.

But any efficient modern power supply should be able to power an older computer too so this doesn't change much.
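The dollar difference at idle is correspondingly trivial. A back-of-envelope calculation (assuming an illustrative $0.15/kWh and a machine idling year-round):

```python
def idle_cost_per_year(watts, price_per_kwh=0.15, hours=24 * 365):
    """Electricity cost of a machine sitting at idle all year."""
    return watts / 1000 * hours * price_per_kwh

print(round(idle_cost_per_year(22), 2))  # Core 2 Duo era box: ~$28.91/yr
print(round(idle_cost_per_year(21), 2))  # Ryzen 3600 box:     ~$27.59/yr
```

A dollar and change per year is not a reason to replace a working computer.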

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?


Some very specific complaints about modern PC hardware and the linux desktop

on blog at

I recently built a new PC for the first time in a decade. I'm struggling with a number of surprising and distasteful changes to computing practices on both the hardware and the software sides. This is my list of modern computing annoyances and the partial mitigations so far.

1. Stock coolers sold with $200 CPUs are no longer capable of cooling those processors, and aftermarket cooling is required. My Ryzen 3600 went to 95C and throttled within 20 seconds on stock cooling. Conversely, the stock cooler with an Intel 3570K from a decade ago not only kept everything below 80C, it did so even with a massive overclock. Buying a $45 aftermarket cooler fixed this.

2. It's easy to forget that even on a fancy new computer, if you use an *old* BIOS-only video card you *CANNOT BOOT IN EFI MODE*. And while I like BIOS mode more than EFI mode, at some point I want to buy a modern video card. When I do, I may have to resize my partitions to make an EFI boot partition, somehow switch my OS boot process to EFI while still using the old video card (with no way to confirm success, because it won't display on an EFI boot), and only then put in the modern EFI video card. Additionally, being forced into CSM/BIOS mode for booting means I now have a GPT formatted and partitioned drive whose MBR master boot record is what's actually used for booting. I wasn't even aware that was possible before doing it.
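One small consolation: it's easy to check which mode a running Linux system actually booted in, since the kernel only exposes /sys/firmware/efi after an EFI boot. A quick sketch:

```shell
#!/bin/sh
# /sys/firmware/efi exists only when the kernel was started via EFI,
# so its presence distinguishes a real EFI boot from BIOS/CSM mode.
if [ -d /sys/firmware/efi ]; then
    mode=EFI
else
    mode=BIOS
fi
echo "$mode"
```

Handy for confirming whether the CSM quietly took over after a firmware update.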

3. All desktop environments have suffered from "convergence" with mobile design styles and slow bitrot. Gtk3 File Open dialogs no longer have text entry forms for the filename and path by default. This prevents pasting into a just-opened Open dialog until you press ctrl-l. And the gsettings switch for this no longer functions. The Gtk devs confirmed this is as intended and GtkFileChooserWidget will stay this way. My attempts to create a patch are feeble at best so far, and only restore the filename-entry location mode for the first File->Open operation of each application load.

I've been poking at gtkfilechooserwidget.c and .ui for about 2 weeks now. Of course the first thing I tried was just changing the initialization function settings for location mode. But it turned out operation mode had to be changed too (thanks Random!). But that only fixes for the first file->open operation. In order to permanently fix it I think the best path forward is to artificially send the GtkFileChooserWidget::location-popup: signal on initialization of pretty much any function that looks like it works on location mode or operation mode. I've tried doing this using location_popup_handler (impl, NULL); but I haven't fixed it yet.

Index: gtk+-3.24.5/gtk/gtkfilechooserwidget.c
===================================================================
--- gtk+-3.24.5.orig/gtk/gtkfilechooserwidget.c
+++ gtk+-3.24.5/gtk/gtkfilechooserwidget.c
@@ -8607,8 +8607,8 @@ gtk_file_chooser_widget_init (GtkFileCho
   priv->load_state = LOAD_EMPTY;
   priv->reload_state = RELOAD_EMPTY;
   priv->pending_select_files = NULL;
-  priv->location_mode = LOCATION_MODE_PATH_BAR;
-  priv->operation_mode = OPERATION_MODE_BROWSE;
+  priv->location_mode = LOCATION_MODE_FILENAME_ENTRY;
+  priv->operation_mode = OPERATION_MODE_ENTER_LOCATION;
   priv->sort_column = MODEL_COL_NAME;
   priv->sort_order = GTK_SORT_ASCENDING;
   priv->recent_manager = gtk_recent_manager_get_default ();

4. Just because systemd Debian 10 has cron/crontab and syslog, don't expect them to actually work. syslog won't update its time when the system time is changed, and so cron won't either. Additionally, crontab -e no longer updates cron immediately. Instead all changes are picked up at the start of the next minute. This means if you want to test a crontab -e entry you need to set it for at least *2* minutes into the future, not just one.

root      2997  0.0  0.0   8504  2872 ?        Ss   Apr03   0:00 /usr/sbin/cron -f
Apr  4 10:32:11 janus crontab[5608]: (superkuh) BEGIN EDIT (superkuh)
Apr  4 10:32:15 janus crontab[5608]: (superkuh) REPLACE (superkuh)
Apr  4 10:32:15 janus crontab[5608]: (superkuh) END EDIT (superkuh)
Apr  4 10:33:01 janus cron[2997]: (superkuh) RELOAD (crontabs/superkuh)

[comment on this post] Append "/@say/your message here" to the URL in the location bar and hit enter.

[webmention/pingback] Did you respond to this post? What's the URL?