fREW Schmidt (frioux) · ZipRecruiter · Santa Monica, CA · https://blog.afoolishmanifesto.com

bobtfish/catalystx-simplelogin 27

Simple login controller and template bundle for Catalyst

arcanez/SQL-Translator 16

SQL::Translator

bobtfish/catalyst-action-rest 14

The Catalyst::Action::REST Distribution

bobtfish/catalyst-actionrole-acl 7

Apply ACLs to Catalyst actions

frioux/blog 7

My blog

frioux/app-adenosine-prefab 4

Batteries Included - this is just a deps included version of http://github.com/frioux/app-adenosine, go there for pull reqs and bugs

push event frioux/leatherman

dependabot[bot]

commit sha 67b550b784ba4b449ff3ddbfc46d67e252628e45

build(deps): bump modernc.org/sqlite from 1.11.2 to 1.12.0

Bumps [modernc.org/sqlite](https://gitlab.com/cznic/sqlite) from 1.11.2 to 1.12.0.
  • [Release notes](https://gitlab.com/cznic/sqlite/tags)
  • [Commits](https://gitlab.com/cznic/sqlite/compare/v1.11.2...v1.12.0)

---
updated-dependencies:
- dependency-name: modernc.org/sqlite
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
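The bump above corresponds to a one-line change in leatherman's go.mod; a sketch of the relevant require directive (module path and versions are from the commit message; the rest of the file is omitted):

```
require modernc.org/sqlite v1.12.0 // previously v1.11.2
```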

view details

push time: 18 hours ago

PR merged frioux/leatherman

build(deps): bump modernc.org/sqlite from 1.11.2 to 1.12.0 (label: dependencies)

Bumps modernc.org/sqlite from 1.11.2 to 1.12.0.

Commits:
  • d5a408e regenerate darwin/arm64
  • 80c708f fix race on conn.{Close,interrupt}, updates #57 (https://gitlab.com/cznic/sqlite/issues/57)
  • cb1f916 enable session support, updates #58 (https://gitlab.com/cznic/sqlite/issues/58)
  • See full diff in the compare view: https://gitlab.com/cznic/sqlite/compare/v1.11.2...v1.12.0

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

+11 -7

0 comments

2 changed files

dependabot[bot]

PR closed: 18 hours ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

@bradfitz yep, works!

$ ping pi400
PING pi400.frioux.gmail.com.beta.tailscale.net (100.118.162.57) 56(84) bytes of data.
64 bytes from pi400.frioux.gmail.com.beta.tailscale.net (100.118.162.57): icmp_seq=1 ttl=64 time=362 ms
64 bytes from pi400.frioux.gmail.com.beta.tailscale.net (100.118.162.57): icmp_seq=2 ttl=64 time=6.49 ms
^C
--- pi400.frioux.gmail.com.beta.tailscale.net ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 6.494/184.345/362.196/177.851 ms
caliburn 💀 🕡  ~/Downloads 
¢ systemctl suspend
caliburn 💀 🕡  ~/Downloads 
$ ping pi400       
PING pi400.frioux.gmail.com.beta.tailscale.net (100.118.162.57) 56(84) bytes of data.
64 bytes from pi400.frioux.gmail.com.beta.tailscale.net (100.118.162.57): icmp_seq=1 ttl=64 time=10.5 ms
64 bytes from pi400.frioux.gmail.com.beta.tailscale.net (100.118.162.57): icmp_seq=2 ttl=64 time=33.2 ms
^C
--- pi400.frioux.gmail.com.beta.tailscale.net ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 10.539/21.861/33.184/11.322 ms
caliburn 💀 🕡  ~/Downloads 
$ tailscale version 
1.11.160
  tailscale commit: 281d5036261c77fddf54d89479044493b8075547
  other commit: 229c9a366f4951f2d3e3a62d1c3f324a6d20cfdf
  go version: go1.16.6-ts6fa85e8
frioux

comment created: a day ago

pull request comment tailscale/tailscale

wgengine: re-set DNS config on Linux after a major link change

Ah, I didn't realize tailscale down was different from stopping the service. Thanks, this fixed it!

bradfitz

comment created: 2 days ago

pull request comment tailscale/tailscale

wgengine: re-set DNS config on Linux after a major link change

I installed locally built versions:

$ tailscale version 
date.20210603
  go version: go1.16.5
$ tailscaled -version
date.20210603
  go version: go1.16.5

But when I try to start it I get this:

$ sudo tailscale up
2021/07/26 08:38:57 GotNotify: Version mismatch! frontend="date.20210603" backend="1.11.137-ta5fb8e073-gfad17047b"
backend error: GotNotify: Version mismatch! frontend="date.20210603" backend="1.11.137-ta5fb8e073-gfad17047b"
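The error above fires because the tailscale CLI (the "frontend") and the still-running tailscaled daemon (the "backend") report different version strings. A toy shell re-creation of the shape of that comparison, using the strings from the transcript (the real check lives inside tailscale's IPC layer, not in a script):

```shell
# Toy re-creation of the comparison behind the error above: the CLI
# (frontend) refuses to proceed when the daemon (backend) reports a
# different version string. Strings are copied from the transcript.
frontend="date.20210603"                   # locally built tailscale CLI
backend="1.11.137-ta5fb8e073-gfad17047b"   # still-running packaged tailscaled
if [ "$frontend" != "$backend" ]; then
  echo "GotNotify: Version mismatch! frontend=\"$frontend\" backend=\"$backend\""
fi
```

Per the follow-up comment in this thread, the fix was to actually stop the old daemon (not just run tailscale down) so the locally built tailscaled could take its place.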
bradfitz

comment created: 2 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

haha, well, stopping NetworkManager drops those settings:

caliburn 💀 🕥  ~C/tailscale «main» 
¢ sudo tailscale down && sudo tailscale up
caliburn 💀 🕥  ~C/tailscale «main» 
$ sudo resolvectl                         
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 
                      31.172.in-addr.arpa 
                      corp                
                      d.f.ip6.arpa        
                      home                
                      internal            
                      intranet            
                      lan                 
                      local               
                      private             
                      test                

Link 7 (tailscale0)
      Current Scopes: DNS                                
DefaultRoute setting: no                                 
       LLMNR setting: no                                 
MulticastDNS setting: no                                 
  DNSOverTLS setting: no                                 
      DNSSEC setting: no                                 
    DNSSEC supported: no                                 
  Current DNS Server: 100.100.100.100                    
         DNS Servers: 100.100.100.100                    
          DNS Domain: frioux.gmail.com.beta.tailscale.net
                      frioux.gmail.com                   
                      ~0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa  
                      ~100.100.in-addr.arpa              
                      ~101.100.in-addr.arpa              
                      ~102.100.in-addr.arpa              
                      ~103.100.in-addr.arpa              
                      ~104.100.in-addr.arpa              
                      ~105.100.in-addr.arpa              
                      ~106.100.in-addr.arpa              
                      ~107.100.in-addr.arpa              
                      ~108.100.in-addr.arpa              
                      ~109.100.in-addr.arpa              
                      ~110.100.in-addr.arpa              
                      ~111.100.in-addr.arpa              
                      ~112.100.in-addr.arpa              
                      ~113.100.in-addr.arpa              
                      ~114.100.in-addr.arpa              
                      ~115.100.in-addr.arpa              
                      ~116.100.in-addr.arpa              
                      ~117.100.in-addr.arpa              
                      ~118.100.in-addr.arpa              
                      ~119.100.in-addr.arpa              
                      ~120.100.in-addr.arpa              
                      ~121.100.in-addr.arpa              
                      ~122.100.in-addr.arpa              
                      ~123.100.in-addr.arpa              
                      ~124.100.in-addr.arpa              
                      ~125.100.in-addr.arpa              
                      ~126.100.in-addr.arpa              
                      ~127.100.in-addr.arpa              
                      ~64.100.in-addr.arpa               
                      ~65.100.in-addr.arpa               
                      ~66.100.in-addr.arpa               
                      ~67.100.in-addr.arpa               
                      ~68.100.in-addr.arpa               
caliburn 💀 🕥  ~C/tailscale «main» 
$ sudo systemctl stop NetworkManager
caliburn 💀 🕥  ~C/tailscale «main» 
$ sudo resolvectl                   
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 
                      31.172.in-addr.arpa 
                      corp                
                      d.f.ip6.arpa        
                      home                
                      internal            
                      intranet            
                      lan                 
                      local               
                      private             
                      test                

Link 7 (tailscale0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 5 (docker0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 3 (wlp4s0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 

frioux

comment created: 4 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

Summary:

  • stopped NetworkManager
  • resolvectl did have tailscale0
  • uptime from resolved: 1 day 13h
  • did the dance
  • resolvectl still had tailscale0
  • uptime from resolved: still 1 day 13h

Full transcript in case I can't read:

caliburn 💀 🕤  ~C/tailscale «main» 
$ sudo systemctl stop NetworkManager
[sudo] password for frew: 
caliburn 💀 🕤  ~C/tailscale «main» 
$ sudo resolvectl
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 
                      31.172.in-addr.arpa 
                      corp                
                      d.f.ip6.arpa        
                      home                
                      internal            
                      intranet            
                      lan                 
                      local               
                      private             
                      test                

Link 7 (tailscale0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: no  
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 5 (docker0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 3 (wlp4s0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 2 (enp0s31f6)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  
caliburn 💀 🕤  ~C/tailscale «main» 
$ sudo systemctl status systemd-resolved
● systemd-resolved.service - Network Name Resolution
     Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2021-07-22 08:02:56 PDT; 1 day 13h ago
       Docs: man:systemd-resolved.service(8)
             https://www.freedesktop.org/wiki/Software/systemd/resolved
             https://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
             https://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
   Main PID: 250244 (systemd-resolve)
     Status: "Processing requests..."
      Tasks: 1 (limit: 18791)
     Memory: 4.7M
     CGroup: /system.slice/systemd-resolved.service
             └─250244 /lib/systemd/systemd-resolved

Jul 23 10:34:14 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
Jul 23 17:50:31 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 17:50:32 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 17:50:32 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
Jul 23 18:19:01 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 18:19:03 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 18:19:06 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
Jul 23 21:26:51 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 21:26:53 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 21:26:58 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
caliburn 💀 🕤  ~C/tailscale «main» 
$ sudo resolvectl
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 
                      31.172.in-addr.arpa 
                      corp                
                      d.f.ip6.arpa        
                      home                
                      internal            
                      intranet            
                      lan                 
                      local               
                      private             
                      test                

Link 7 (tailscale0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: no  
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 5 (docker0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 3 (wlp4s0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
caliburn 💀 🕤  ~C/tailscale «main» 
$ sudo systemctl status systemd-resolved
● systemd-resolved.service - Network Name Resolution
     Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2021-07-22 08:02:56 PDT; 1 day 13h ago
       Docs: man:systemd-resolved.service(8)
             https://www.freedesktop.org/wiki/Software/systemd/resolved
             https://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
             https://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
   Main PID: 250244 (systemd-resolve)
     Status: "Processing requests..."
      Tasks: 1 (limit: 18791)
     Memory: 4.7M
     CGroup: /system.slice/systemd-resolved.service
             └─250244 /lib/systemd/systemd-resolved

Jul 23 10:34:14 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
Jul 23 17:50:31 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 17:50:32 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 17:50:32 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
Jul 23 18:19:01 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 18:19:03 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 18:19:06 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
Jul 23 21:26:51 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 21:26:53 caliburn systemd-resolved[250244]: Flushed all caches.
Jul 23 21:26:58 caliburn systemd-resolved[250244]: Using degraded feature set (UDP) for DNS server 100.100.100.100.
frioux

comment created: 4 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

Ok, that did it, on the way

frioux

comment created: 4 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

This log line makes me think you won't get the info you want, but I'll email what I got just in case:

Jul 23 17:50:11 caliburn NetworkManager[976954]: <warn>  [1627087811.8967] config: invalid logging configuration: Unknown log level 'ALL:DEBUG'
frioux

comment created: 4 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

When working / before suspend:

Global
       LLMNR setting: no
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
          DNSSEC NTA: 10.in-addr.arpa
                      16.172.in-addr.arpa
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa
                      18.172.in-addr.arpa
                      19.172.in-addr.arpa
                      20.172.in-addr.arpa
                      21.172.in-addr.arpa
                      22.172.in-addr.arpa
                      23.172.in-addr.arpa
                      24.172.in-addr.arpa
                      25.172.in-addr.arpa
                      26.172.in-addr.arpa
                      27.172.in-addr.arpa
                      28.172.in-addr.arpa
                      29.172.in-addr.arpa
                      30.172.in-addr.arpa
                      31.172.in-addr.arpa
                      corp
                      d.f.ip6.arpa
                      home
                      internal
                      intranet
                      lan
                      local
                      private
                      test

Link 7 (tailscale0)
      Current Scopes: DNS
DefaultRoute setting: no
       LLMNR setting: no
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
  Current DNS Server: 100.100.100.100
         DNS Servers: 100.100.100.100
          DNS Domain: frioux.gmail.com.beta.tailscale.net
                      frioux.gmail.com
                      ~0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa
                      ~100.100.in-addr.arpa
                      ~101.100.in-addr.arpa
                      ~102.100.in-addr.arpa
                      ~103.100.in-addr.arpa
                      ~104.100.in-addr.arpa
                      ~105.100.in-addr.arpa
                      ~106.100.in-addr.arpa
                      ~107.100.in-addr.arpa
                      ~108.100.in-addr.arpa
                      ~109.100.in-addr.arpa
                      ~110.100.in-addr.arpa
                      ~111.100.in-addr.arpa
                      ~112.100.in-addr.arpa
                      ~113.100.in-addr.arpa
                      ~114.100.in-addr.arpa
                      ~115.100.in-addr.arpa
                      ~116.100.in-addr.arpa
                      ~117.100.in-addr.arpa
                      ~118.100.in-addr.arpa
                      ~119.100.in-addr.arpa
                      ~120.100.in-addr.arpa
                      ~121.100.in-addr.arpa
                      ~122.100.in-addr.arpa
                      ~123.100.in-addr.arpa
                      ~124.100.in-addr.arpa
                      ~125.100.in-addr.arpa
                      ~126.100.in-addr.arpa
                      ~127.100.in-addr.arpa
                      ~64.100.in-addr.arpa
                      ~65.100.in-addr.arpa
                      ~66.100.in-addr.arpa
                      ~67.100.in-addr.arpa
                      ~68.100.in-addr.arpa
                      ~69.100.in-addr.arpa
                      ~70.100.in-addr.arpa
                      ~71.100.in-addr.arpa
                      ~72.100.in-addr.arpa
                      ~73.100.in-addr.arpa
                      ~74.100.in-addr.arpa
                      ~75.100.in-addr.arpa
                      ~76.100.in-addr.arpa
                      ~77.100.in-addr.arpa
                      ~78.100.in-addr.arpa
                      ~79.100.in-addr.arpa
                      ~80.100.in-addr.arpa
                      ~81.100.in-addr.arpa
                      ~82.100.in-addr.arpa
                      ~83.100.in-addr.arpa
                      ~84.100.in-addr.arpa
                      ~85.100.in-addr.arpa
                      ~86.100.in-addr.arpa
                      ~87.100.in-addr.arpa
                      ~88.100.in-addr.arpa
                      ~89.100.in-addr.arpa
                      ~90.100.in-addr.arpa
                      ~91.100.in-addr.arpa
                      ~92.100.in-addr.arpa
                      ~93.100.in-addr.arpa
                      ~94.100.in-addr.arpa
                      ~95.100.in-addr.arpa
                      ~96.100.in-addr.arpa
                      ~97.100.in-addr.arpa
                      ~98.100.in-addr.arpa
                      ~99.100.in-addr.arpa

Link 5 (docker0)
      Current Scopes: none
DefaultRoute setting: no
       LLMNR setting: yes
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no

Link 3 (wlp4s0)
      Current Scopes: DNS
DefaultRoute setting: yes
       LLMNR setting: yes
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
  Current DNS Server: 192.168.1.1
         DNS Servers: 192.168.1.1
                      1.1.1.1
                      8.8.8.8
          DNS Domain: ~.

Link 2 (enp0s31f6)
      Current Scopes: none
DefaultRoute setting: no
       LLMNR setting: yes
MulticastDNS setting: no
  DNSOverTLS setting: no
      DNSSEC setting: no
    DNSSEC supported: no

When broken / after resume:
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 
                      31.172.in-addr.arpa 
                      corp                
                      d.f.ip6.arpa        
                      home                
                      internal            
                      intranet            
                      lan                 
                      local               
                      private             
                      test                

Link 7 (tailscale0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 5 (docker0)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  

Link 3 (wlp4s0)
      Current Scopes: DNS        
DefaultRoute setting: yes        
       LLMNR setting: yes        
MulticastDNS setting: no         
  DNSOverTLS setting: no         
      DNSSEC setting: no         
    DNSSEC supported: no         
  Current DNS Server: 192.168.1.1
         DNS Servers: 192.168.1.1
                      1.1.1.1    
                      8.8.8.8    
          DNS Domain: ~.         

Link 2 (enp0s31f6)
      Current Scopes: none
DefaultRoute setting: no  
       LLMNR setting: yes 
MulticastDNS setting: no  
  DNSOverTLS setting: no  
      DNSSEC setting: no  
    DNSSEC supported: no  
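The decisive difference between the two dumps above is tailscale0's "Current Scopes" line: DNS before suspend, none after resume. A minimal sketch of that check in Python (the function name and parsing approach are mine, not from the thread; the sample inputs are trimmed from the output above):

```python
import re

def tailscale_dns_ok(resolvectl_output: str) -> bool:
    """True if the tailscale0 link still carries a DNS scope."""
    m = re.search(r"Link \d+ \(tailscale0\)\n\s*Current Scopes: (\S+)",
                  resolvectl_output)
    return m is not None and m.group(1) == "DNS"

# Trimmed samples from the resolvectl dumps above.
before = "Link 7 (tailscale0)\n      Current Scopes: DNS\n"
after = "Link 7 (tailscale0)\n      Current Scopes: none\n"
print(tailscale_dns_ok(before))  # True
print(tailscale_dns_ok(after))   # False
```

Feeding it the live output of `resolvectl` after a resume would flag the broken state the thread describes.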
frioux

comment created time in 4 days

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

Scheduled for Friday!

frioux

comment created: 6 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

Sure! How do we arrange calendars?

frioux

comment created: 6 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

No dice, still occurs.

frioux

comment created: 6 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

As far as I know it never recovers without a reset, but I haven't paid super close attention.

frioux

comment created: 8 days ago

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

Not broken: BUG-5286a105daf3d974b6241f786c0fd2ca69341f9da4966d5454cbafc2ff8280c8-20210719154408Z-f8a8f096d6582d63

Broken: BUG-5286a105daf3d974b6241f786c0fd2ca69341f9da4966d5454cbafc2ff8280c8-20210719154507Z-ca659eb015629577

goroutine 152467 [running]:
tailscale.com/ipn/localapi.(*Handler).serveGoroutines(0xc000612980, 0xc481c0, 0xc0005b08c0, 0xc000560600)
	tailscale.com@v1.10.2/ipn/localapi/localapi.go:171 +0x7c
tailscale.com/ipn/localapi.(*Handler).ServeHTTP(0xc000612980, 0xc481c0, 0xc0005b08c0, 0xc000560600)
	tailscale.com@v1.10.2/ipn/localapi/localapi.go:90 +0x38b
tailscale.com/ipn/ipnserver.(*server).localhostHandler.func1(0xc481c0, 0xc0005b08c0, 0xc000560600)
	tailscale.com@v1.10.2/ipn/ipnserver/server.go:930 +0x145
net/http.HandlerFunc.ServeHTTP(0xc000070f00, 0xc481c0, 0xc0005b08c0, 0xc000560600)
	net/http/server.go:2049 +0x44
net/http.serverHandler.ServeHTTP(0xc0005b07e0, 0xc481c0, 0xc0005b08c0, 0xc000560600)
	net/http/server.go:2867 +0xa3
net/http.(*conn).serve(0xc0000a0dc0, 0xc48ba0, 0xc000612a00)
	net/http/server.go:1932 +0x8cd
created by net/http.(*Server).Serve
	net/http/server.go:2993 +0x39b

goroutine 1 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780da2f8, 0x72, 0x0)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00024a218, 0x72, 0x0, 0x0, 0xb5dedc)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc00024a200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:507 +0x212
net.(*netFD).accept(0xc00024a200, 0xc0003fc400, 0x0, 0xc0004c19c8)
	net/fd_unix.go:172 +0x45
net.(*UnixListener).accept(0xc0004aa060, 0xc0004c1a00, 0xc0004c1a08, 0x30)
	net/unixsock_posix.go:162 +0x32
net.(*UnixListener).Accept(0xc0004aa060, 0x0, 0x0, 0xc48af8, 0xc0004b2080)
	net/unixsock.go:260 +0x65
tailscale.com/ipn/ipnserver.Run(0xc48af8, 0xc0004b2080, 0xc000030140, 0xc0000d0600, 0x40, 0xc00003e0a0, 0x7ffc7ab56e75, 0x1e, 0xa098, 0x7ffc7ab56e48, ...)
	tailscale.com@v1.10.2/ipn/ipnserver/server.go:711 +0x944
main.run(0x0, 0x0)
	tailscale.com@v1.10.2/cmd/tailscaled/tailscaled.go:309 +0x78d
main.main()
	tailscale.com@v1.10.2/cmd/tailscaled/tailscaled.go:163 +0x505

goroutine 5 [select]:
tailscale.com/logtail.(*Logger).drainBlock(0xc0000320a0, 0x0)
	tailscale.com@v1.10.2/logtail/logtail.go:186 +0x12f
tailscale.com/logtail.(*Logger).drainPending(0xc0000320a0, 0xc000094120, 0xc0000be200, 0x0)
	tailscale.com@v1.10.2/logtail/logtail.go:221 +0x3e5
tailscale.com/logtail.(*Logger).uploading(0xc0000320a0, 0xc48af8, 0xc0000be240)
	tailscale.com@v1.10.2/logtail/logtail.go:268 +0x1c5
created by tailscale.com/logtail.NewLogger
	tailscale.com@v1.10.2/logtail/logtail.go:98 +0x45c

goroutine 33 [syscall]:
syscall.Syscall(0x7, 0xc000527e90, 0x2, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	syscall/asm_linux_amd64.s:20 +0x5
golang.org/x/sys/unix.poll(0xc000527e90, 0x2, 0xffffffffffffffff, 0xc3d680, 0xc0000bc300, 0xb8f630)
	golang.org/x/sys@v0.0.0-20210616094352-59db8d763f22/unix/zsyscall_linux_amd64.go:725 +0x51
golang.org/x/sys/unix.Poll(0xc000527e90, 0x2, 0x2, 0xffffffffffffffff, 0xc000527ea8, 0x81baf1, 0xc3d680)
	golang.org/x/sys@v0.0.0-20210616094352-59db8d763f22/unix/syscall_linux_amd64.go:185 +0x8a
golang.zx2c4.com/wireguard/rwcancel.(*RWCancel).ReadyRead(0xc00000c030, 0xf522e0)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/rwcancel/rwcancel.go:51 +0x97
golang.zx2c4.com/wireguard/tun.(*NativeTun).routineNetlinkListener(0xc000194000)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/tun/tun_linux.go:134 +0x151
created by golang.zx2c4.com/wireguard/tun.CreateTUNFromFile
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/tun/tun_linux.go:480 +0x1d1

goroutine 34 [select]:
golang.zx2c4.com/wireguard/tun.(*NativeTun).routineHackListener(0xc000194000)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/tun/tun_linux.go:93 +0x1f8
created by golang.zx2c4.com/wireguard/tun.CreateTUNFromFile
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/tun/tun_linux.go:481 +0x1f3

goroutine 116 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00009ab40, 0xc000000002)
	runtime/sema.go:513 +0xf8
sync.(*Cond).Wait(0xc00009ab30)
	sync/cond.go:56 +0x99
net/http.(*http2pipe).Read(0xc00009ab28, 0xc0000343c0, 0x4, 0x4, 0x0, 0x0, 0x0)
	net/http/h2_bundle.go:3515 +0x97
net/http.http2transportResponseBody.Read(0xc00009ab00, 0xc0000343c0, 0x4, 0x4, 0x0, 0x0, 0x0)
	net/http/h2_bundle.go:8611 +0xaf
io.ReadAtLeast(0x7f4d77e345c0, 0xc00009ab00, 0xc0000343c0, 0x4, 0x4, 0x4, 0x24f422ca93, 0x0, 0x0)
	io/io.go:328 +0x87
io.ReadFull(...)
	io/io.go:347
tailscale.com/control/controlclient.(*Direct).sendMapRequest(0xc00000a1e0, 0xc48af8, 0xc000612e80, 0xffffffffffffffff, 0xc0004bdf10, 0x0, 0x0)
	tailscale.com@v1.10.2/control/controlclient/direct.go:756 +0x14dc
tailscale.com/control/controlclient.(*Direct).PollNetMap(...)
	tailscale.com@v1.10.2/control/controlclient/direct.go:572
tailscale.com/control/controlclient.(*Auto).mapRoutine(0xc00009c3c0)
	tailscale.com@v1.10.2/control/controlclient/auto.go:464 +0x571
created by tailscale.com/control/controlclient.(*Auto).Start
	tailscale.com@v1.10.2/control/controlclient/auto.go:151 +0x65

goroutine 25 [chan receive, 834 minutes]:
github.com/godbus/dbus/v5.newConn.func1(0xc00009c000)
	github.com/godbus/dbus/v5@v5.0.4/conn.go:274 +0x4b
created by github.com/godbus/dbus/v5.newConn
	github.com/godbus/dbus/v5@v5.0.4/conn.go:273 +0x13b

goroutine 26 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7f4d780da4c8, 0x72, 0x1000)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000112118, 0x72, 0x0, 0x10, 0xc0001c0020)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).ReadMsg(0xc000112100, 0xc0001fcc40, 0x10, 0x10, 0xc0001c0020, 0x1000, 0x1000, 0x0, 0x0, 0x0, ...)
	internal/poll/fd_unix.go:303 +0x252
net.(*netFD).readMsg(0xc000112100, 0xc0001fcc40, 0x10, 0x10, 0xc0001c0020, 0x1000, 0x1000, 0x3900000008000101, 0x0, 0xc0005fe900, ...)
	net/fd_posix.go:78 +0x90
net.(*UnixConn).readMsg(0xc0000ba138, 0xc0001fcc40, 0x10, 0x10, 0xc0001c0020, 0x1000, 0x1000, 0xc000757590, 0xc000151920, 0xfcf450, ...)
	net/unixsock_posix.go:115 +0x91
net.(*UnixConn).ReadMsgUnix(0xc0000ba138, 0xc0001fcc40, 0x10, 0x10, 0xc0001c0020, 0x1000, 0x1000, 0xc00008dda0, 0xc00008ddd0, 0x40da9b, ...)
	net/unixsock.go:143 +0x9d
github.com/godbus/dbus/v5.(*oobReader).Read(0xc0001c0000, 0xc0001fcc40, 0x10, 0x10, 0xc00008de00, 0x40e1f8, 0x18)
	github.com/godbus/dbus/v5@v5.0.4/transport_unix.go:21 +0x8d
io.ReadAtLeast(0xc3b340, 0xc0001c0000, 0xc0001fcc40, 0x10, 0x10, 0x10, 0x0, 0x0, 0x0)
	io/io.go:328 +0x87
io.ReadFull(...)
	io/io.go:347
github.com/godbus/dbus/v5.(*unixTransport).ReadMessage(0xc0000b4288, 0x5d, 0xc000151920, 0xc00000005d)
	github.com/godbus/dbus/v5@v5.0.4/transport_unix.go:91 +0x126
github.com/godbus/dbus/v5.(*Conn).inWorker(0xc00009c000)
	github.com/godbus/dbus/v5@v5.0.4/conn.go:375 +0x52
created by github.com/godbus/dbus/v5.(*Conn).Auth
	github.com/godbus/dbus/v5@v5.0.4/auth.go:118 +0x667

goroutine 103 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780daa38, 0x72, 0xc3d680)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000070318, 0x72, 0x1, 0x0, 0x0)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).RawRead(0xc000070300, 0xc000401d30, 0x0, 0x0)
	internal/poll/fd_unix.go:659 +0xff
os.(*rawConn).Read(0xc00000e080, 0xc000401d30, 0x1, 0xc000114b60)
	os/rawconn.go:31 +0x65
github.com/mdlayher/socket.(*Conn).read(0xc000150f60, 0xb5e14b, 0x7, 0xc000114b60, 0x48, 0xc000052400)
	github.com/mdlayher/socket@v0.0.0-20210307095302-262dc9984e00/conn.go:404 +0xe7
github.com/mdlayher/socket.(*Conn).Recvmsg(0xc000150f60, 0xc0005e6000, 0x1000, 0x1000, 0x0, 0x0, 0x0, 0x2, 0x48, 0x0, ...)
	github.com/mdlayher/socket@v0.0.0-20210307095302-262dc9984e00/conn.go:344 +0x192
github.com/mdlayher/netlink.(*conn).Receive(0xc00000e088, 0xc000250f60, 0x4, 0x4, 0xc000000280, 0x7783a5)
	github.com/mdlayher/netlink@v1.4.1/conn_linux.go:133 +0xeb
github.com/mdlayher/netlink.(*Conn).receive(0xc0000be800, 0xc0004cd950, 0x203000, 0x203000, 0x203000, 0x10)
	github.com/mdlayher/netlink@v1.4.1/conn.go:273 +0x6f
github.com/mdlayher/netlink.(*Conn).lockedReceive(0xc0000be800, 0xc000052400, 0x0, 0xc0001c9cf8, 0x40e1f8, 0x30)
	github.com/mdlayher/netlink@v1.4.1/conn.go:232 +0x45
github.com/mdlayher/netlink.(*Conn).Receive(0xc0000be800, 0x0, 0x0, 0x0, 0x0, 0x0)
	github.com/mdlayher/netlink@v1.4.1/conn.go:225 +0x7c
tailscale.com/wgengine/monitor.(*nlConn).Receive(0xc000150f90, 0xc3ca00, 0xc0004cd980, 0x0, 0x0)
	tailscale.com@v1.10.2/wgengine/monitor/monitor_linux.go:60 +0xdba
tailscale.com/wgengine/monitor.(*Mon).pump(0xc0000dc370)
	tailscale.com@v1.10.2/wgengine/monitor/monitor.go:235 +0x82
created by tailscale.com/wgengine/monitor.(*Mon).Start
	tailscale.com@v1.10.2/wgengine/monitor/monitor.go:175 +0x147

goroutine 51 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780da950, 0x72, 0xffffffffffffffff)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc0000bc1f8, 0x72, 0xff01, 0xfff3, 0xffffffffffffffff)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc0000bc1e0, 0xc0003bc04c, 0xfff3, 0xfff3, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:166 +0x1d5
os.(*File).read(...)
	os/file_posix.go:31
os.(*File).Read(0xc0000ba038, 0xc0003bc04c, 0xfff3, 0xfff3, 0xc00019bf3c, 0x2, 0x2)
	os/file.go:119 +0x77
golang.zx2c4.com/wireguard/tun.(*NativeTun).Read(0xc000194000, 0xc0003bc040, 0xffff, 0xffff, 0x10, 0x1, 0x0, 0x1)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/tun/tun_linux.go:372 +0x15f
tailscale.com/net/tstun.(*Wrapper).poll(0xc0003bc000)
	tailscale.com@v1.10.2/net/tstun/wrap.go:248 +0x10d
created by tailscale.com/net/tstun.Wrap
	tailscale.com@v1.10.2/net/tstun/wrap.go:144 +0x1e8

goroutine 52 [select, 834 minutes]:
tailscale.com/net/tstun.(*Wrapper).pumpEvents(0xc0003bc000)
	tailscale.com@v1.10.2/net/tstun/wrap.go:182 +0x125
created by tailscale.com/net/tstun.Wrap
	tailscale.com@v1.10.2/net/tstun/wrap.go:145 +0x20a

goroutine 53 [select, 834 minutes]:
golang.zx2c4.com/wireguard/ratelimiter.(*Ratelimiter).Init.func1(0xc000094ba0, 0xc0000ce5c0)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/ratelimiter/ratelimiter.go:70 +0xad
created by golang.zx2c4.com/wireguard/ratelimiter.(*Ratelimiter).Init
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/ratelimiter/ratelimiter.go:66 +0x105

goroutine 54 [semacquire, 834 minutes]:
sync.runtime_Semacquire(0xc0004725c8)
	runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0004725c0)
	sync/waitgroup.go:130 +0x65
golang.zx2c4.com/wireguard/device.newHandshakeQueue.func1(0xc0004725b8)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/channels.go:68 +0x31
created by golang.zx2c4.com/wireguard/device.newHandshakeQueue
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/channels.go:67 +0xa7

goroutine 55 [semacquire, 834 minutes]:
sync.runtime_Semacquire(0xc0004725e0)
	runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0004725d8)
	sync/waitgroup.go:130 +0x65
golang.zx2c4.com/wireguard/device.newOutboundQueue.func1(0xc0004725d0)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/channels.go:32 +0x31
created by golang.zx2c4.com/wireguard/device.newOutboundQueue
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/channels.go:31 +0xa7

goroutine 56 [semacquire, 834 minutes]:
sync.runtime_Semacquire(0xc0004725f8)
	runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0004725f0)
	sync/waitgroup.go:130 +0x65
golang.zx2c4.com/wireguard/device.newInboundQueue.func1(0xc0004725e8)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/channels.go:50 +0x31
created by golang.zx2c4.com/wireguard/device.newInboundQueue
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/channels.go:49 +0xa7

goroutine 57 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0000ce500, 0x1)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:376 +0x218
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:307 +0x235

goroutine 58 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0000ce500, 0x1)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:212 +0x20b
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:308 +0x265

goroutine 59 [chan receive, 1 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0000ce500, 0x1)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:244 +0x1a5
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:309 +0x291

goroutine 60 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0000ce500, 0x2)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:376 +0x218
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:307 +0x235

goroutine 61 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0000ce500, 0x2)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:212 +0x20b
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:308 +0x265

goroutine 62 [chan receive, 1 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0000ce500, 0x2)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:244 +0x1a5
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:309 +0x291

goroutine 63 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0000ce500, 0x3)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:376 +0x218
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:307 +0x235

goroutine 64 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0000ce500, 0x3)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:212 +0x20b
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:308 +0x265

goroutine 65 [chan receive, 1 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0000ce500, 0x3)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:244 +0x1a5
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:309 +0x291

goroutine 66 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0000ce500, 0x4)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:376 +0x218
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:307 +0x235

goroutine 67 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0000ce500, 0x4)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:212 +0x20b
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:308 +0x265

goroutine 68 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0000ce500, 0x4)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:244 +0x1a5
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:309 +0x291

goroutine 69 [select]:
tailscale.com/net/tstun.(*Wrapper).Read(0xc0003bc000, 0xc0006c2000, 0xffff, 0xffff, 0x10, 0x0, 0x0, 0x0)
	tailscale.com@v1.10.2/net/tstun/wrap.go:332 +0x11a
golang.zx2c4.com/wireguard/device.(*Device).RoutineReadFromTUN(0xc0000ce500)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:228 +0xee
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:314 +0x30b

goroutine 70 [chan receive, 834 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineTUNEventReader(0xc0000ce500)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/tun.go:20 +0xb5
created by golang.zx2c4.com/wireguard/device.NewDevice
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:315 +0x32d

goroutine 71 [chan receive, 834 minutes]:
tailscale.com/wgengine.NewUserspaceEngine.func5(0xc0000cc480)
	tailscale.com@v1.10.2/wgengine/userspace.go:339 +0x5a
created by tailscale.com/wgengine.NewUserspaceEngine
	tailscale.com@v1.10.2/wgengine/userspace.go:337 +0xba7

goroutine 72 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780da5b0, 0x72, 0x0)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00024b398, 0x72, 0xff00, 0xffff, 0x0)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).ReadFromInet4(0xc00024b380, 0xc000708000, 0xffff, 0xffff, 0xc0001abbb0, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:249 +0x1b5
net.(*netFD).readFromInet4(0xc00024b380, 0xc000708000, 0xffff, 0xffff, 0xc0001abbb0, 0x5c, 0x0, 0x0)
	net/fd_posix.go:66 +0x5f
net.(*UDPConn).readFrom(0xc00000e0a0, 0xc000708000, 0xffff, 0xffff, 0xc0001abd68, 0x4, 0xc0001abcc8, 0x876848, 0xc0004724f8)
	net/udpsock_posix.go:51 +0x2b9
net.(*UDPConn).readFromUDP(0xc00000e0a0, 0xc000708000, 0xffff, 0xffff, 0xc0001abd68, 0x0, 0xc0000d295c, 0xb90258, 0xc0001abd98)
	net/udpsock.go:115 +0x6a
net.(*UDPConn).ReadFromUDP(...)
	net/udpsock.go:107
tailscale.com/wgengine/magicsock.(*RebindingUDPConn).ReadFromNetaddr(0xc0004724f8, 0xc000708000, 0xffff, 0xffff, 0x0, 0xffffc0a80119, 0xc0000b40a8, 0xc0000ba2a9, 0xc0000d28e8, 0x0, ...)
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:2854 +0xdf
tailscale.com/wgengine/magicsock.(*Conn).receiveIPv4(0xc0000d2840, 0xc000708000, 0xffff, 0xffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:1605 +0xd4
golang.zx2c4.com/wireguard/device.(*Device).RoutineReceiveIncoming(0xc0000ce500, 0xc00046e700)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:94 +0x1b7
created by golang.zx2c4.com/wireguard/device.(*Device).BindUpdate
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:506 +0x37d

goroutine 73 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780da3e0, 0x72, 0x0)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00024b418, 0x72, 0xff00, 0xffff, 0x0)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).ReadFromInet6(0xc00024b400, 0xc000510000, 0xffff, 0xffff, 0xc000486bd0, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:276 +0x1b5
net.(*netFD).readFromInet6(0xc00024b400, 0xc000510000, 0xffff, 0xffff, 0xc000486bd0, 0x0, 0xc3b1c0, 0xc00009e1f0)
	net/fd_posix.go:72 +0x5f
net.(*UDPConn).readFrom(0xc00000e0b0, 0xc000510000, 0xffff, 0xffff, 0xc000486d68, 0x0, 0xc000486cc8, 0x876848, 0xc000472528)
	net/udpsock_posix.go:58 +0x105
net.(*UDPConn).readFromUDP(0xc00000e0b0, 0xc000510000, 0xffff, 0xffff, 0xc000486d68, 0x0, 0x0, 0xc3c340, 0xc0000c1540)
	net/udpsock.go:115 +0x6a
net.(*UDPConn).ReadFromUDP(...)
	net/udpsock.go:107
tailscale.com/wgengine/magicsock.(*RebindingUDPConn).ReadFromNetaddr(0xc000472528, 0xc000510000, 0xffff, 0xffff, 0xffff, 0xa56120, 0x1, 0xc000510000, 0xc000486e08, 0x83ff4d, ...)
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:2854 +0xdf
tailscale.com/wgengine/magicsock.(*Conn).receiveIPv6(0xc0000d2840, 0xc000510000, 0xffff, 0xffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:1590 +0xd4
golang.zx2c4.com/wireguard/device.(*Device).RoutineReceiveIncoming(0xc0000ce500, 0xc00046e710)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:94 +0x1b7
created by golang.zx2c4.com/wireguard/device.(*Device).BindUpdate
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:506 +0x37d

goroutine 74 [chan receive]:
tailscale.com/wgengine/magicsock.(*connBind).receiveDERP(0xc0004724e0, 0xc0006a2000, 0xffff, 0xffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:1658 +0xfe
golang.zx2c4.com/wireguard/device.(*Device).RoutineReceiveIncoming(0xc0000ce500, 0xc00046e720)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:94 +0x1b7
created by golang.zx2c4.com/wireguard/device.(*Device).BindUpdate
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/device.go:506 +0x37d

goroutine 115 [chan receive, 1 minutes]:
tailscale.com/control/controlclient.(*Auto).authRoutine(0xc00009c3c0)
	tailscale.com@v1.10.2/control/controlclient/auto.go:294 +0xab5
created by tailscale.com/control/controlclient.(*Auto).Start
	tailscale.com@v1.10.2/control/controlclient/auto.go:150 +0x3f

goroutine 113 [chan receive]:
tailscale.com/ipn/ipnlocal.(*LocalBackend).readPoller(0xc0000ccfc0)
	tailscale.com@v1.10.2/ipn/ipnlocal/local.go:1084 +0x2ad
created by tailscale.com/ipn/ipnlocal.(*LocalBackend).Start.func1
	tailscale.com@v1.10.2/ipn/ipnlocal/local.go:795 +0x8c

goroutine 16 [select]:
tailscale.com/portlist.(*Poller).Run(0xc0000be2c0, 0xc48af8, 0xc0000be280, 0x0, 0x0)
	tailscale.com@v1.10.2/portlist/poller.go:77 +0x17c
created by tailscale.com/ipn/ipnlocal.(*LocalBackend).Start.func1
	tailscale.com@v1.10.2/ipn/ipnlocal/local.go:794 +0x6a

goroutine 104 [select]:
tailscale.com/wgengine/monitor.(*Mon).debounce(0xc0000dc370)
	tailscale.com@v1.10.2/wgengine/monitor/monitor.go:257 +0x1d5
created by tailscale.com/wgengine/monitor.(*Mon).Start
	tailscale.com@v1.10.2/wgengine/monitor/monitor.go:176 +0x169

goroutine 106 [select]:
tailscale.com/net/dns/resolver.(*Resolver).NextResponse(0xc00024a580, 0xe, 0xc000138060, 0x1, 0x1, 0x0, 0x0, 0x31320000, 0xc3cba0, 0xfcf160)
	tailscale.com@v1.10.2/net/dns/resolver/tsdns.go:205 +0xd8
tailscale.com/net/dns.(*Manager).NextResponse(...)
	tailscale.com@v1.10.2/net/dns/manager.go:190
tailscale.com/wgengine.(*userspaceEngine).pollResolver(0xc0000cc480)
	tailscale.com@v1.10.2/wgengine/userspace.go:439 +0x4b
created by tailscale.com/wgengine.NewUserspaceEngine
	tailscale.com@v1.10.2/wgengine/userspace.go:372 +0xdee

goroutine 87 [syscall, 834 minutes]:
os/signal.signal_recv(0x542e6b636174732f)
	runtime/sigqueue.go:168 +0xa5
os/signal.loop()
	os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
	os/signal/signal.go:151 +0x45

goroutine 88 [select, 834 minutes]:
main.run.func6(0xc0004760c0, 0xc000030140, 0xc0001d0050, 0xc48af8, 0xc0004b2080)
	tailscale.com@v1.10.2/cmd/tailscaled/tailscaled.go:292 +0xad
created by main.run
	tailscale.com@v1.10.2/cmd/tailscaled/tailscaled.go:291 +0x5b6

goroutine 89 [select, 834 minutes]:
tailscale.com/ipn/ipnserver.Run.func1(0xc48af8, 0xc0004b2080, 0xc000406000, 0xc00010a1c0, 0xc48010, 0xc0004aa060)
	tailscale.com@v1.10.2/ipn/ipnserver/server.go:605 +0x87
created by tailscale.com/ipn/ipnserver.Run
	tailscale.com@v1.10.2/ipn/ipnserver/server.go:604 +0x278

goroutine 151048 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Peer).RoutineSequentialReceiver(0xc00075ee00)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:406 +0x16d
created by golang.zx2c4.com/wireguard/device.(*Peer).Start
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/peer.go:202 +0x29c

goroutine 151876 [select]:
tailscale.com/control/controlclient.(*Direct).sendMapRequest.func1(0xc00045a300, 0xb90618, 0xc000030460, 0xc00000a1e0, 0xc0002563a0, 0xc00045a2a0)
	tailscale.com@v1.10.2/control/controlclient/direct.go:717 +0xf8
created by tailscale.com/control/controlclient.(*Direct).sendMapRequest
	tailscale.com@v1.10.2/control/controlclient/direct.go:715 +0x11b9

goroutine 151796 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780da210, 0x72, 0xffffffffffffffff)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00024b618, 0x72, 0x1500, 0x1513, 0xffffffffffffffff)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00024b600, 0xc0004ed800, 0x1513, 0x1513, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc00024b600, 0xc0004ed800, 0x1513, 0x1513, 0x150e, 0xc0004ed800, 0x5)
	net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc0000ba760, 0xc0004ed800, 0x1513, 0x1513, 0x0, 0x0, 0x0)
	net/net.go:183 +0x91
crypto/tls.(*atLeastReader).Read(0xc0000b40d8, 0xc0004ed800, 0x1513, 0x1513, 0x150e, 0xc000052400, 0x0)
	crypto/tls/conn.go:776 +0x63
bytes.(*Buffer).ReadFrom(0xc00075e5f8, 0xc3afa0, 0xc0000b40d8, 0x40b825, 0xa87a20, 0xb2ed00)
	bytes/buffer.go:204 +0xbe
crypto/tls.(*Conn).readFromUntil(0xc00075e380, 0xc3c380, 0xc0000ba760, 0x5, 0xc0000ba760, 0xbd)
	crypto/tls/conn.go:798 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc00075e380, 0x0, 0x0, 0x0)
	crypto/tls/conn.go:605 +0x115
crypto/tls.(*Conn).readRecord(...)
	crypto/tls/conn.go:573
crypto/tls.(*Conn).Read(0xc00075e380, 0xc00043e000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	crypto/tls/conn.go:1276 +0x165
bufio.(*Reader).fill(0xc00049e360)
	bufio/bufio.go:101 +0x108
bufio.(*Reader).ReadByte(0xc00049e360, 0x2da2d2a1623e, 0xf9f420, 0x0)
	bufio/bufio.go:253 +0x39
tailscale.com/derp.readFrameHeader(0xc00049e360, 0xb8, 0xb8, 0x0)
	tailscale.com@v1.10.2/derp/derp.go:147 +0x2f
tailscale.com/derp.(*Client).recvTimeout(0xc0006ee480, 0x1bf08eb000, 0x0, 0x0, 0x0, 0x0)
	tailscale.com@v1.10.2/derp/derp_client.go:394 +0x116
tailscale.com/derp.(*Client).Recv(...)
	tailscale.com@v1.10.2/derp/derp_client.go:368
tailscale.com/derp/derphttp.(*Client).RecvDetail(0xc0006ee300, 0xc000639ab8, 0x0, 0x0, 0x2, 0xf9f401)
	tailscale.com@v1.10.2/derp/derphttp/derphttp_client.go:746 +0xa5
tailscale.com/wgengine/magicsock.(*Conn).runDerpReader(0xc0000d2840, 0xc48af8, 0xc0001e4a80, 0x0, 0xffff7f030328, 0xc0000b40a8, 0x2, 0xc0006ee300, 0xc000496200, 0xc0001935c0)
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:1429 +0x6c5
created by tailscale.com/wgengine/magicsock.(*Conn).derpWriteChanOfAddr
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:1337 +0xb05

goroutine 151875 [select]:
net/http.http2awaitRequestCancel(0xc00050a600, 0xc000095ec0, 0xc0006ee180, 0xc48af8)
	net/http/h2_bundle.go:6820 +0xe5
net/http.(*http2clientStream).awaitRequestCancel(0xc00009ab00, 0xc00050a600)
	net/http/h2_bundle.go:6846 +0x45
created by net/http.(*http2clientConnReadLoop).handleResponse
	net/http/h2_bundle.go:8557 +0x728

goroutine 135509 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780da128, 0x72, 0xffffffffffffffff)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc00024a298, 0x72, 0x1300, 0x13bf, 0xffffffffffffffff)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00024a280, 0xc00047ea00, 0x13bf, 0x13bf, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc00024a280, 0xc00047ea00, 0x13bf, 0x13bf, 0x13ba, 0xc00047ea00, 0x5)
	net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000474008, 0xc00047ea00, 0x13bf, 0x13bf, 0x0, 0x0, 0x0)
	net/net.go:183 +0x91
crypto/tls.(*atLeastReader).Read(0xc0001dc2a0, 0xc00047ea00, 0x13bf, 0x13bf, 0xc0005239f8, 0xc0003fc400, 0x0)
	crypto/tls/conn.go:776 +0x63
bytes.(*Buffer).ReadFrom(0xc0004c8978, 0xc3afa0, 0xc0001dc2a0, 0x40b825, 0xa87a20, 0xb2ed00)
	bytes/buffer.go:204 +0xbe
crypto/tls.(*Conn).readFromUntil(0xc0004c8700, 0xc3c380, 0xc000474008, 0x5, 0xc000474008, 0xa)
	crypto/tls/conn.go:798 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc0004c8700, 0x0, 0x0, 0xc000523e88)
	crypto/tls/conn.go:605 +0x115
crypto/tls.(*Conn).readRecord(...)
	crypto/tls/conn.go:573
crypto/tls.(*Conn).Read(0xc0004c8700, 0xc00058f000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	crypto/tls/conn.go:1276 +0x165
net/http.(*persistConn).Read(0xc0001ce240, 0xc00058f000, 0x1000, 0x1000, 0xc000523dc4, 0x2, 0x2)
	net/http/transport.go:1933 +0x77
bufio.(*Reader).fill(0xc000707d40)
	bufio/bufio.go:101 +0x108
bufio.(*Reader).Peek(0xc000707d40, 0x1, 0x0, 0x1, 0x1, 0x1, 0x0)
	bufio/bufio.go:139 +0x4f
net/http.(*persistConn).readLoop(0xc0001ce240)
	net/http/transport.go:2094 +0x1a8
created by net/http.(*Transport).dialConn
	net/http/transport.go:1754 +0xc73

goroutine 151109 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Peer).RoutineSequentialReceiver(0xc00075f880)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:406 +0x16d
created by golang.zx2c4.com/wireguard/device.(*Peer).Start
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/peer.go:202 +0x29c

goroutine 150985 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7f4d77da49b0, 0x72, 0x0)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000438e18, 0x72, 0x0, 0x0, 0xb5dedc)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc000438e00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:507 +0x212
net.(*netFD).accept(0xc000438e00, 0x440c12, 0x0, 0xc000603120)
	net/fd_unix.go:172 +0x45
net.(*TCPListener).accept(0xc0003eb650, 0xc0006030e0, 0xc000676208, 0xc0003df6d0)
	net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0003eb650, 0xc0003df650, 0x46ec6b, 0xc000420a08, 0xc000476c01)
	net/tcpsock.go:261 +0x65
tailscale.com/ipn/ipnlocal.(*peerAPIListener).serve(0xc0000c0730)
	tailscale.com@v1.10.2/ipn/ipnlocal/peerapi.go:427 +0xa4
created by tailscale.com/ipn/ipnlocal.(*LocalBackend).initPeerAPIListener
	tailscale.com@v1.10.2/ipn/ipnlocal/local.go:2005 +0x67c

goroutine 151047 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Peer).RoutineSequentialSender(0xc00075ee00)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:418 +0x12b
created by golang.zx2c4.com/wireguard/device.(*Peer).Start
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/peer.go:201 +0x277

goroutine 151060 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Peer).RoutineSequentialReceiver(0xc00075f180)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/receive.go:406 +0x16d
created by golang.zx2c4.com/wireguard/device.(*Peer).Start
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/peer.go:202 +0x29c

goroutine 151797 [select]:
tailscale.com/wgengine/magicsock.(*Conn).runDerpWriter(0xc0000d2840, 0xc48af8, 0xc0001e4a80, 0xc0006ee300, 0xc000706a20, 0xc000496200, 0xc0001935c0)
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:1537 +0x1b9
created by tailscale.com/wgengine/magicsock.(*Conn).derpWriteChanOfAddr
	tailscale.com@v1.10.2/wgengine/magicsock/magicsock.go:1338 +0xb78

goroutine 151059 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Peer).RoutineSequentialSender(0xc00075f180)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:418 +0x12b
created by golang.zx2c4.com/wireguard/device.(*Peer).Start
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/peer.go:201 +0x277

goroutine 150984 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7f4d780d9f58, 0x72, 0x0)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000438d98, 0x72, 0x0, 0x0, 0xb5dedc)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Accept(0xc000438d80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:507 +0x212
net.(*netFD).accept(0xc000438d80, 0xc0000b40a8, 0x0, 0x0)
	net/fd_unix.go:172 +0x45
net.(*TCPListener).accept(0xc0003eb620, 0xd96, 0x0, 0xffff672b4b31)
	net/tcpsock_posix.go:139 +0x32
net.(*TCPListener).Accept(0xc0003eb620, 0x1, 0x4010106, 0xc000158074, 0xb90258)
	net/tcpsock.go:261 +0x65
tailscale.com/ipn/ipnlocal.(*peerAPIListener).serve(0xc0000c06e0)
	tailscale.com@v1.10.2/ipn/ipnlocal/peerapi.go:427 +0xa4
created by tailscale.com/ipn/ipnlocal.(*LocalBackend).initPeerAPIListener
	tailscale.com@v1.10.2/ipn/ipnlocal/local.go:2005 +0x67c

goroutine 135510 [select]:
net/http.(*persistConn).writeLoop(0xc0001ce240)
	net/http/transport.go:2393 +0xf7
created by net/http.(*Transport).dialConn
	net/http/transport.go:1755 +0xc98

goroutine 152468 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d77da4b80, 0x72, 0xffffffffffffffff)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000112498, 0x72, 0x1000, 0x1000, 0xffffffffffffffff)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000112480, 0xc0005c6000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc000112480, 0xc0005c6000, 0x1000, 0x1000, 0xc000034420, 0xc00042e201, 0xc0006127c0)
	net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc0004741c0, 0xc0005c6000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	net/net.go:183 +0x91
bufio.(*Reader).Read(0xc000070c00, 0xc00040cfd1, 0x1, 0x1, 0xc0000d2840, 0x0, 0xffffc0a80119)
	bufio/bufio.go:227 +0x222
tailscale.com/ipn/ipnserver.(*protoSwitchConn).Read(0xc00040cc00, 0xc00040cfd1, 0x1, 0x1, 0x0, 0x0, 0x0)
	tailscale.com@v1.10.2/ipn/ipnserver/server.go:918 +0x4d
net/http.(*connReader).backgroundRead(0xc00040cfc0)
	net/http/server.go:672 +0x58
created by net/http.(*connReader).startBackgroundRead
	net/http/server.go:668 +0xd5

goroutine 135622 [IO wait]:
internal/poll.runtime_pollWait(0x7f4d780da698, 0x72, 0xffffffffffffffff)
	runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000439a98, 0x72, 0x3100, 0x31dd, 0xffffffffffffffff)
	internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000439a80, 0xc000698000, 0x31dd, 0x31dd, 0x0, 0x0, 0x0)
	internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc000439a80, 0xc000698000, 0x31dd, 0x31dd, 0x31d8, 0xc000698000, 0x5)
	net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc00000e018, 0xc000698000, 0x31dd, 0x31dd, 0x0, 0x0, 0x0)
	net/net.go:183 +0x91
crypto/tls.(*atLeastReader).Read(0xc0000b42b8, 0xc000698000, 0x31dd, 0x31dd, 0x31d8, 0xc000052c00, 0x0)
	crypto/tls/conn.go:776 +0x63
bytes.(*Buffer).ReadFrom(0xc00018e5f8, 0xc3afa0, 0xc0000b42b8, 0x40b825, 0xa87a20, 0xb2ed00)
	bytes/buffer.go:204 +0xbe
crypto/tls.(*Conn).readFromUntil(0xc00018e380, 0xc3c380, 0xc00000e018, 0x5, 0xc00000e018, 0x400)
	crypto/tls/conn.go:798 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc00018e380, 0x0, 0x0, 0x2)
	crypto/tls/conn.go:605 +0x115
crypto/tls.(*Conn).readRecord(...)
	crypto/tls/conn.go:573
crypto/tls.(*Conn).Read(0xc00018e380, 0xc00013d000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	crypto/tls/conn.go:1276 +0x165
bufio.(*Reader).Read(0xc0005a1c20, 0xc00014e498, 0x9, 0x9, 0xc000603320, 0xc00009ab48, 0xc00048bc68)
	bufio/bufio.go:227 +0x222
io.ReadAtLeast(0xc3ae80, 0xc0005a1c20, 0xc00014e498, 0x9, 0x9, 0x9, 0x0, 0xc4a34ad1ecad01, 0xc00009ab30)
	io/io.go:328 +0x87
io.ReadFull(...)
	io/io.go:347
net/http.http2readFrameHeader(0xc00014e498, 0x9, 0x9, 0xc3ae80, 0xc0005a1c20, 0x0, 0x0, 0x0, 0x0)
	net/http/h2_bundle.go:1477 +0x89
net/http.(*http2Framer).ReadFrame(0xc00014e460, 0xc000456480, 0x0, 0x0, 0x0)
	net/http/h2_bundle.go:1735 +0xa5
net/http.(*http2clientConnReadLoop).run(0xc00048bfa8, 0x0, 0x0)
	net/http/h2_bundle.go:8322 +0xd8
net/http.(*http2ClientConn).readLoop(0xc000254c00)
	net/http/h2_bundle.go:8244 +0x6f
created by net/http.(*http2Transport).newClientConn
	net/http/h2_bundle.go:7208 +0x6c5

goroutine 151108 [chan receive]:
golang.zx2c4.com/wireguard/device.(*Peer).RoutineSequentialSender(0xc00075f880)
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/send.go:418 +0x12b
created by golang.zx2c4.com/wireguard/device.(*Peer).Start
	golang.zx2c4.com/wireguard@v0.0.0-20210525143454-64cb82f2b3f5/device/peer.go:201 +0x277
frioux

comment created time in 9 days

issue comment tailscale/tailscale

Tailscale DNS stops working after suspend

systemd 245 (245.4-4ubuntu3.7)
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid

Only DNS breaks; I can run tailscale status and connect via IP address just fine.

$ ip rule
0:      from all lookup local
5210:   from all fwmark 0x80000 lookup main
5230:   from all fwmark 0x80000 lookup default
5250:   from all fwmark 0x80000 unreachable
5270:   from all lookup 52
32766:  from all lookup main
32767:  from all lookup default
frioux

comment created time in 9 days

issue opened tailscale/tailscale

Tailscale DNS stops working after suspend

I am pretty sure this is a regression; it didn't happen at some point before. My current Tailscale version is 1.10.2 and I'm on Ubuntu 20.04.

created time in 9 days

push event frioux/leatherman

Arthur Axel 'fREW' Schmidt

commit sha 89074a8324605e3a4707d77fdec3f87da093c6e4

.gitpod.yml: enable prebuilds

view details

push time in 13 days

push event frioux/blog

Shannon Barrett

commit sha 3570e47b1e417d2e5725d33ad24901e846636798

Fixed small spelling mistake

view details

push time in 15 days

PR merged frioux/blog

Fixed small spelling mistake
+1 -1

0 comments

1 changed file

shiitake

pr closed time in 15 days

push event frioux/leatherman

dependabot[bot]

commit sha e5057f465959fc5c526a5ce6d9988cb3a52a7d51

build(deps): bump github.com/PuerkitoBio/goquery from 1.7.0 to 1.7.1 Bumps [github.com/PuerkitoBio/goquery](https://github.com/PuerkitoBio/goquery) from 1.7.0 to 1.7.1. - [Release notes](https://github.com/PuerkitoBio/goquery/releases) - [Commits](https://github.com/PuerkitoBio/goquery/compare/v1.7.0...v1.7.1) --- updated-dependencies: - dependency-name: github.com/PuerkitoBio/goquery dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com>

view details

push time in 16 days

PR merged frioux/leatherman

build(deps): bump github.com/PuerkitoBio/goquery from 1.7.0 to 1.7.1 dependencies

Bumps github.com/PuerkitoBio/goquery from 1.7.0 to 1.7.1. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/PuerkitoBio/goquery/commit/372d5bdaa0efb08f1f90c6791601c565674d64af"><code>372d5bd</code></a> Add changelog</li> <li><a href="https://github.com/PuerkitoBio/goquery/commit/5dfda0e354f796efe53897e22c57b61f626e042c"><code>5dfda0e</code></a> Merge branch 'jauderho-update-gomod'</li> <li><a href="https://github.com/PuerkitoBio/goquery/commit/d6951d24bb0d3a798f0e5fbf5c45a114a1203f54"><code>d6951d2</code></a> Update go modules</li> <li><a href="https://github.com/PuerkitoBio/goquery/commit/c09491e08b5b38776b5c411c11e472ca3c2fcf63"><code>c09491e</code></a> Update dependabot.yml</li> <li><a href="https://github.com/PuerkitoBio/goquery/commit/02e233c33a9296530078da066b05d26ec305f05c"><code>02e233c</code></a> Create dependabot.yml</li> <li>See full diff in <a href="https://github.com/PuerkitoBio/goquery/compare/v1.7.0...v1.7.1">compare view</a></li> </ul> </details> <br />

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


<details> <summary>Dependabot commands and options</summary> <br />

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

</details>

+9 -8

0 comments

2 changed files

dependabot[bot]

pr closed time in 16 days

issue comment tailscale/tailscale

`tailscale up` prints "context canceled" on some failures

FWIW I found that I can reproduce this by just running down and up immediately after each other:

$ sudo tailscale down && sudo tailscale up
[sudo] password for frew: 
context canceled
$ sudo tailscale up                       
$ sudo tailscale down && sudo tailscale up
$ sudo tailscale down && sudo tailscale up
context canceled
$ sudo tailscale down && sudo tailscale up
$ sudo tailscale down && sudo tailscale up
context canceled

frioux

comment created time in 20 days

issue comment tailscale/tailscale

Weird error when running `tailscale up`

Here's what I think are some relevant logs:

Jul 05 12:47:24 caliburn tailscaled[592417]: Switching ipn state Running -> Stopped (WantRunning=false, nm=true)
Jul 05 12:47:24 caliburn tailscaled[592417]: control: setPaused(true)
Jul 05 12:47:24 caliburn tailscaled[592417]: health("overall"): error: state=Stopped, wantRunning=false
Jul 05 12:47:24 caliburn tailscaled[592417]: magicsock: SetPrivateKey called (zeroed)
Jul 05 12:47:24 caliburn tailscaled[592417]: magicsock: closing connection to derp-2 (zero-private-key), age 14s
Jul 05 12:47:24 caliburn tailscaled[592417]: [RATELIMIT] format("magicsock: %v active derp conns%s") (3 dropped)
Jul 05 12:47:24 caliburn tailscaled[592417]: magicsock: 0 active derp conns
Jul 05 12:47:24 caliburn tailscaled[592417]: wgengine: Reconfig: configuring userspace wireguard config (with 0/0 peers)
Jul 05 12:47:24 caliburn tailscaled[592417]: control: mapRoutine: paused
Jul 05 12:47:24 caliburn tailscaled[592417]: control: mapRoutine: awaiting unpause
Jul 05 12:47:24 caliburn tailscaled[592417]: wgengine: Reconfig: configuring router
Jul 05 12:47:25 caliburn tailscaled[592417]: control: HostInfo: {"IPNVersion":"1.10.1-t6b6016130-g49e1dcc20","BackendLogID":"5286a105daf3d974b6241f786c0fd2ca69341f9da4966d5454cbafc2ff8280c8","OS":"linux","OSVersi>
Jul 05 12:47:25 caliburn tailscaled[592417]: magicsock: ReSTUN("link-change-minor") ignored; stopped, no private key
Jul 05 12:47:25 caliburn tailscaled[592417]: [RATELIMIT] format("monitor: %s: src=%v, dst=%v, gw=%v, outif=%v, table=%v") (1 dropped)
Jul 05 12:47:25 caliburn tailscaled[592417]: monitor: RTM_DELROUTE: src=, dst=fd7a:115c:a1e0::/48, gw=, outif=25, table=52
Jul 05 12:47:25 caliburn tailscaled[592417]: monitor: RTM_DELROUTE: src=100.121.173.4/0, dst=100.121.173.4/32, gw=, outif=25, table=255
Jul 05 12:47:25 caliburn tailscaled[592417]: [RATELIMIT] format("monitor: %s: src=%v, dst=%v, gw=%v, outif=%v, table=%v")
Jul 05 12:47:25 caliburn tailscaled[592417]: wgengine: Reconfig: configuring DNS
Jul 05 12:47:25 caliburn tailscaled[592417]: dns: Set: {DefaultResolvers:[] Routes:map[] SearchDomains:[] Hosts:map[]}
Jul 05 12:47:25 caliburn tailscaled[592417]: dns: Resolvercfg: {Routes:map[] Hosts:map[] LocalDomains:[]}
Jul 05 12:47:25 caliburn tailscaled[592417]: dns: OScfg: {Nameservers:[] SearchDomains:[] MatchDomains:[]}
Jul 05 12:47:25 caliburn tailscaled[592417]: magicsock: ReSTUN("link-change-minor") ignored; stopped, no private key
Jul 05 12:47:29 caliburn tailscaled[592417]: ipnserver: conn4: connection from userid 0; root has access
Jul 05 12:47:29 caliburn tailscaled[592417]: EditPrefs: MaskedPrefs{WantRunning=true}
Jul 05 12:47:29 caliburn tailscaled[592417]: transitioning to running; doing Login...
Jul 05 12:47:29 caliburn tailscaled[592417]: control: client.Login(false, 0)
Jul 05 12:47:29 caliburn tailscaled[592417]: Switching ipn state Stopped -> Starting (WantRunning=true, nm=true)
Jul 05 12:47:29 caliburn tailscaled[592417]: control: setPaused(false)
Jul 05 12:47:29 caliburn tailscaled[592417]: control: mapRoutine: unpaused
Jul 05 12:47:29 caliburn tailscaled[592417]: magicsock: SetPrivateKey called (init)
Jul 05 12:47:29 caliburn tailscaled[592417]: magicsock: private key changed, reconnecting to home derp-2
Jul 05 12:47:29 caliburn tailscaled[592417]: wgengine: Reconfig: configuring userspace wireguard config (with 0/7 peers)
Jul 05 12:47:29 caliburn tailscaled[592417]: magicsock: adding connection to derp-2 for home-keep-alive
Jul 05 12:47:29 caliburn tailscaled[592417]: magicsock: 1 active derp conns: derp-2=cr0s,wr0s
Jul 05 12:47:29 caliburn tailscaled[592417]: [RATELIMIT] format("%s: connecting to derp-%d (%v)") (16 dropped)
Jul 05 12:47:29 caliburn tailscaled[592417]: derphttp.Client.Recv: connecting to derp-2 (sfo)
Jul 05 12:47:29 caliburn tailscaled[592417]: control: authRoutine: state:authenticated; wantLoggedIn=true
Jul 05 12:47:29 caliburn tailscaled[592417]: wgengine: Reconfig: configuring router
Jul 05 12:47:29 caliburn tailscaled[592417]: control: direct.TryLogin(token=false, flags=0)
Jul 05 12:47:29 caliburn tailscaled[592417]: control: mapRoutine: state:authenticating
Jul 05 12:47:29 caliburn tailscaled[592417]: control: doLogin(regen=false, hasUrl=false)
Jul 05 12:47:29 caliburn tailscaled[592417]: control: RegisterReq: onode=[Jmvn9] node=[Wa0gm] fup=false
Jul 05 12:47:29 caliburn tailscaled[592417]: magicsock: derp-2 connected; connGen=1
Jul 05 12:47:29 caliburn tailscaled[592417]: control: RegisterReq: got response; nodeKeyExpired=false, machineAuthorized=true; authURL=false
frioux

comment created time in 22 days

issue opened tailscale/tailscale

Weird error when running `tailscale up`

I'm a Go programmer, so I have a hunch as to what happened, but I suspect this could use some polish:

$ sudo tailscale up 
context canceled
# exit code = 1
$ sudo tailscale up 

I can probably dig into the logs later if you want, but the machine is off right now.

created time in 22 days

PR merged frioux/leatherman

build(deps): bump github.com/yuin/goldmark from 1.3.9 to 1.4.0 dependencies

Bumps github.com/yuin/goldmark from 1.3.9 to 1.4.0. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/yuin/goldmark/commit/5588d92a56fe1642791cf4aa8e9eae8227cfeecd"><code>5588d92</code></a> Support CommonMark 0.30</li> <li>See full diff in <a href="https://github.com/yuin/goldmark/compare/v1.3.9...v1.4.0">compare view</a></li> </ul> </details> <br />


+3 -3

0 comments

2 changed files

dependabot[bot]

pr closed time in 23 days

push event frioux/leatherman

dependabot[bot]

commit sha 34f982b5a8f6b74c3c0c848d00d36bd2b8b21afc

build(deps): bump github.com/yuin/goldmark from 1.3.9 to 1.4.0 Bumps [github.com/yuin/goldmark](https://github.com/yuin/goldmark) from 1.3.9 to 1.4.0. - [Release notes](https://github.com/yuin/goldmark/releases) - [Commits](https://github.com/yuin/goldmark/compare/v1.3.9...v1.4.0) --- updated-dependencies: - dependency-name: github.com/yuin/goldmark dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com>

view details

push time in 23 days

push event frioux/clog

Arthur Axel 'fREW' Schmidt

commit sha 201eae2254a5abf3b45cd0782b4642311c700730

coffee

view details

push time in 23 days

push event frioux/leatherman

dependabot[bot]

commit sha 6dae1ea7472e9485bbd8a024b01638f80281b4fe

build(deps): bump modernc.org/sqlite from 1.11.1 to 1.11.2 Bumps [modernc.org/sqlite](https://gitlab.com/cznic/sqlite) from 1.11.1 to 1.11.2. - [Release notes](https://gitlab.com/cznic/sqlite/tags) - [Commits](https://gitlab.com/cznic/sqlite/compare/v1.11.1...v1.11.2) --- updated-dependencies: - dependency-name: modernc.org/sqlite dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com>

view details

push time in a month

PR merged frioux/leatherman

build(deps): bump modernc.org/sqlite from 1.11.1 to 1.11.2 dependencies

Bumps modernc.org/sqlite from 1.11.1 to 1.11.2. <details> <summary>Commits</summary> <ul> <li><a href="https://gitlab.com/cznic/sqlite/commit/fbc07fb841defec228c94f32ef0a3f98f9440b34"><code>fbc07fb</code></a> windows/amd64: add note about experimental status</li> <li>See full diff in <a href="https://gitlab.com/cznic/sqlite/compare/v1.11.1...v1.11.2">compare view</a></li> </ul> </details> <br />


+3 -3

0 comments

2 changed files

dependabot[bot]

pr closed time in a month