3 Node cluster "permission denied - invalid PVE ticket (401)"


  • #1

Tried to create a 3-node cluster with a fresh Proxmox VE 6.0-4 install.
Cluster creation works and adding a second node works as well, but after I added the 3rd node I get "permission denied - invalid PVE ticket (401)" (only for the third; the other 2 are still working).

In the web interface I can access nodes 1 and 2, but node 3 aborts with this message. Node 3 can't access any node.

Dominic


  • #2

Did you try clearing your browser cache or using a different browser?

  • #3

Did you try clearing your browser cache or using a different browser?

Yes, to both.

What I tried so far:
- used another browser/workstation to access it
- separated the 3rd node, used delnode on the other nodes, then re-added it
- tried the above, and before re-adding I cleared all references I could find on the 2 working nodes
- checked timedatectl and synced the time and timezone between all nodes
- reinstalled node 3, synced the time, and added it to the cluster again (having first cleared all references from the other nodes)

None of this worked. After "pvecm add ip-of-the-first-node" it reports success and the web panel shows the node in the cluster with its local and local-lvm storage. When I expand it I get "permission denied - invalid PVE ticket (401)"…

No idea what i should try next.
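
A reasonable first diagnostic pass on the failing node would be to compare clocks and check the auth-related logs. This is a sketch using standard PVE/systemd tools, not something from the thread:

```shell
# On the failing node: check cluster membership and quorum
pvecm status

# Compare clocks (a skew of more than a few seconds can invalidate tickets)
timedatectl

# Look for ticket/auth errors from the API daemons
journalctl -u pvedaemon -u pveproxy --since "-1 hour" | grep -i -E "ticket|auth|401"
```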

  • #4

Crazy: reinstalled all 3 nodes and now it works.

  • #5

Same thing is happening to me too. Fourth cluster I've built, but first time using the GUI and a separate corosync network to do so (now with 6.0-4).

Hosts can all ping one another on the corosync network, and all went fine until joining nodes #2 and #3 via the GUI.

Is the corosync cluster network supposed to be able to reach the NTP server directly from that separate network?

EDIT: more detail:

2 of the 3 nodes seem to be OK. The 3rd node has joined the cluster and is visible in the other 2 nodes' management windows via the web UI.

Node 3 asks for a login each time it is visited. Nothing works from this node's web UI, but it does believe it has joined the cluster (nodes 1 and 2 are visible, but clicking anything throws errors: 401 no ticket in the shell, and "NaN" repeatedly in other fields within the cluster management view).

  • #6

Same thing is happening to me too. Fourth cluster I've built, but first time using the GUI and a separate corosync network to do so (now with 6.0-4).

Hosts can all ping one another on the corosync network, and all went fine until joining nodes #2 and #3 via the GUI.

Is the corosync cluster network supposed to be able to reach the NTP server directly from that separate network?

EDIT: more detail:

2 of the 3 nodes seem to be OK. The 3rd node has joined the cluster and is visible in the other 2 nodes' management windows via the web UI.

Node 3 asks for a login each time it is visited. Nothing works from this node's web UI, but it does believe it has joined the cluster (nodes 1 and 2 are visible, but clicking anything throws errors: 401 no ticket in the shell, and "NaN" repeatedly in other fields within the cluster management view).

For anyone else running into this: I seem to have solved it for now. Still not sure why the error happened during cluster creation!

1.)

Code:

pvecm updatecerts
systemctl restart pvedaemon pveproxy

2.) restarted nodes.
3.) cleared browser cookies for all three nodes.

…I still had the errors until the web browser itself was purged of cache, closed, and restarted.
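
The server-side steps above can be sketched as a single sequence, run on the affected node (clearing the browser cache must still be done client-side):

```shell
# 1.) Regenerate node certificates from the cluster CA
pvecm updatecerts

# ...and restart the API daemon and proxy so they pick up the new certs
systemctl restart pvedaemon pveproxy

# 2.) Reboot the node
reboot
```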

  • #7

Browser doesn’t seem to be the issue. I am still trying to fix this. It reoccurs for us every 10 days or so.

  • #8

Browser doesn’t seem to be the issue. I am still trying to fix this. It reoccurs for us every 10 days or so.

OMG, I am not the only one! I have a cluster with 3 nodes and the error comes randomly, sometimes after 2-3 days, sometimes longer.

  • #9

Hello, I was having the same problem; the way I fixed it is:
1. Delete these files (<node> is your node name):

  • /etc/pve/pve-root-ca.pem
  • /etc/pve/priv/pve-root-ca.key
  • /etc/pve/nodes/<node>/pve-ssl.pem
  • /etc/pve/nodes/<node>/pve-ssl.key
  • /etc/pve/authkey.pub
  • /etc/pve/priv/authkey.key
  • /etc/pve/priv/authorized_keys

2. pvecm updatecerts -f
3. systemctl restart pvedaemon pveproxy

Hope it works for others too
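
Taken together, and assuming a hypothetical node name of pve3 (substitute your own), the steps above look roughly like this. Note that authkey.key and the root CA are cluster-wide, so deleting them affects all nodes; treat this as a last resort:

```shell
NODE=pve3  # substitute your node name

# 1. Delete the CA, node certificates, and auth keys
rm /etc/pve/pve-root-ca.pem \
   /etc/pve/priv/pve-root-ca.key \
   /etc/pve/nodes/$NODE/pve-ssl.pem \
   /etc/pve/nodes/$NODE/pve-ssl.key \
   /etc/pve/authkey.pub \
   /etc/pve/priv/authkey.key \
   /etc/pve/priv/authorized_keys

# 2. Force-regenerate certificates and keys
pvecm updatecerts -f

# 3. Restart the API services
systemctl restart pvedaemon pveproxy
```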

  • #10

Please also verify that your hosts' clocks are synced.
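
A quick way to check this from one machine, assuming root SSH access and hypothetical hostnames pve1 through pve3:

```shell
# Print each node's idea of the current time, plus its NTP sync state
for h in pve1 pve2 pve3; do
  echo "== $h =="
  ssh root@$h 'date -u "+%Y-%m-%d %H:%M:%S UTC"; timedatectl show -p NTPSynchronized'
done
```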

  • #11

We have a 3-node setup. The lesser 3rd node, used only as a replication target for backup purposes, still has the issue.

We have since purchased the license subscription and are currently running Virtual Environment 6.1-8. Since the affected node can be rebooted, I have added this as a daily cron job, and the problem is worked around this way. I disabled the reboot last week, and today the node is not reachable from the others, with the error "permission denied - invalid PVE ticket (401)".

Proxmox, fix this.

  • #12

My clocks are in sync… whenever I observed them.

This could still be a good clue. My 2 main nodes are bare-metal, but my 3rd node is a VM (bhyve). Maybe the host's time sync is periodically adjusting the guest's clock? Can anyone add weight to this?

  • #13

My clocks are in sync… whenever I observed them.

This could still be a good clue. My 2 main nodes are bare-metal, but my 3rd node is a VM (bhyve). Maybe the host's time sync is periodically adjusting the guest's clock? Can anyone add weight to this?

A late reply perhaps, but I think you might be on to something, as I've come across this before. In many cases the default for a VM is to sync its time with the parent partition, i.e. with the host it's running on. Make sure this is not the case for your 3rd node, and that all of your hosts are using the same time source.

  • #14

I solved this by setting up the same NTP server on all servers.
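
On PVE 6 with the default systemd-timesyncd, one way to point every node at the same server is the following sketch (ntp.example.com is a placeholder; run on each node):

```shell
# Set a common NTP server for systemd-timesyncd
cat >> /etc/systemd/timesyncd.conf <<'EOF'
[Time]
NTP=ntp.example.com
EOF

systemctl restart systemd-timesyncd

# Confirm synchronization took effect
timedatectl status
```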

  • #15

Hello, I was having the same problem; the way I fixed it is:
1. Delete these files (<node> is your node name):

  • /etc/pve/pve-root-ca.pem
  • /etc/pve/priv/pve-root-ca.key
  • /etc/pve/nodes/<node>/pve-ssl.pem
  • /etc/pve/nodes/<node>/pve-ssl.key
  • /etc/pve/authkey.pub
  • /etc/pve/priv/authkey.key
  • /etc/pve/priv/authorized_keys

2. pvecm updatecerts -f
3. systemctl restart pvedaemon pveproxy

Hope it works for others too

Can confirm this works. My cluster had this 401 issue on all nodes (not just one); I had tried NTP, pvecm updatecerts, and rebooting the whole cluster, but all failed. I ended up fixing it with this method, replacing the pve-ssl cert on all nodes. Thanks skywyw.

  • #16

Hello, I was having the same problem; the way I fixed it is:
1. Delete these files (<node> is your node name):

  • /etc/pve/pve-root-ca.pem
  • /etc/pve/priv/pve-root-ca.key
  • /etc/pve/nodes/<node>/pve-ssl.pem
  • /etc/pve/nodes/<node>/pve-ssl.key
  • /etc/pve/authkey.pub
  • /etc/pve/priv/authkey.key
  • /etc/pve/priv/authorized_keys

2. pvecm updatecerts -f
3. systemctl restart pvedaemon pveproxy

Hope it works for others too

@skywyw, which node(s) should I run these commands on?

I have a 4-node cluster and 1 node is giving me the "permission denied - invalid PVE ticket (401)" error.

And do I remove pve-ssl.pem & pve-ssl.key for just the one that's having trouble, or for all nodes?

Last edited: Jan 10, 2021

  • #17

I had the same problem and it turned out the new node had a faulty DNS server entry. Fixing that resolved the issue.

  • #18

@skywyw, which node(s) should I run these commands on?

I have a 4-node cluster and 1 node is giving me the "permission denied - invalid PVE ticket (401)" error.

And do I remove pve-ssl.pem & pve-ssl.key for just the one that's having trouble, or for all nodes?

I have a similar problem: 5 nodes, and only one is giving me the "permission denied - invalid PVE ticket (401)" error.
Do you have any solution? I tried setting up the same NTP server on all the nodes; it did not help.

They are production servers, so I can't reboot them whenever it would suit me.

  • #19

I solved this by setting up the same NTP server on all servers.

Thanks, it's working.

  • #20

Hi,

I know this is a rather old thread, but it might help people who come across it. I encountered the same error on a freshly installed 3-node Proxmox VE cluster.

When switching from one node to the other in the web GUI, the 401 error came up. As it is a testing cluster which is hibernated from time to time, I realized the following points:

- after suspending and waking up the machines there may be a time difference, and according to the logfiles some actions do not tolerate a difference of more than one second
- the browser must know about all certificates and have them accepted if using self-signed certs (log in with all addresses of all nodes)
- the browser cache should be cleared
- storing username/password may help (but for a production cluster I would not recommend this)

Regards, Dietmar

jan.svoboda

Member

I have some problems with the authentication ticket. I know there are multiple threads about the same issue; I tried to follow the steps from those threads that worked for others, but nothing worked for me.
I use the REST API very extensively, so it is crucial to have it working. I use the proxmoxer 1.1.0 Python 3 library with the HTTPS backend as a wrapper around the REST API.

I very often get HTTP 401 Unauthorized: permission denied - invalid PVE ticket while running a Python script against the REST API, well before the ticket's 2-hour lifetime has expired. Sometimes it doesn't obtain a ticket at all.
It happens a few times a week. I have searched for clues in the logs but never found the reason.

All nodes are NTP synchronized; I use the default systemd-timesyncd.

I would be very thankful if someone could look at the attached logs and advise me on what I should reconfigure or which other clues I should search for.
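
To rule out the client library, one option is to request a ticket directly against the standard access endpoint with curl (hostname and credentials below are placeholders; -k skips self-signed-cert verification):

```shell
# Ask the PVE API for a fresh authentication ticket
curl -k -d "username=root@pam" --data-urlencode "password=SECRET" \
  https://pve1.example.com:8006/api2/json/access/ticket
# A working response contains "ticket" and "CSRFPreventionToken" fields;
# a 401 here would mean the problem is server-side, not in proxmoxer.
```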

I have 5 Proxmox nodes in one cluster, all of them running the same version Proxmox VE 6.1-8.

# pveversion -v
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 7.4-1
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

# timedatectl
Local time: Wed 2020-06-17 10:43:41 CEST
Universal time: Wed 2020-06-17 08:43:41 UTC
RTC time: Wed 2020-06-17 08:43:41
Time zone: Europe/Prague (CEST, +0200)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no

# systemctl status -l systemd-timesyncd
● systemd-timesyncd.service — Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Mon 2020-06-15 15:25:07 CEST; 1 day 19h ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 4020 (systemd-timesyn)
Status: «Synchronized to time server for the first time 89.221.212.46:123 (0.debian.pool.ntp.org).»
Tasks: 2 (limit: 7372)
Memory: 6.5M
CGroup: /system.slice/systemd-timesyncd.service
└─4020 /lib/systemd/systemd-timesyncd

Jun 15 15:25:06 devel1 systemd[1]: Starting Network Time Synchronization.
Jun 15 15:25:07 devel1 systemd[1]: Started Network Time Synchronization.
Jun 15 15:25:07 devel1 systemd-timesyncd[4020]: Synchronized to time server for the first time 89.221.210.188:123 (0.debian.pool.ntp.org).
Jun 15 15:44:31 devel1 systemd-timesyncd[4020]: Timed out waiting for reply from 89.221.210.188:123 (0.debian.pool.ntp.org).
Jun 15 15:44:31 devel1 systemd-timesyncd[4020]: Synchronized to time server for the first time 89.221.212.46:123 (0.debian.pool.ntp.org).

# systemctl status -l corosync
● corosync.service — Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-06-15 15:46:01 CEST; 1 day 18h ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 8412 (corosync)
Tasks: 9 (limit: 7372)
Memory: 147.6M
CGroup: /system.slice/corosync.service
└─8412 /usr/sbin/corosync -f

Jun 15 15:46:24 devel1 corosync[8412]: [KNET ] rx: host: 4 link: 0 is up
Jun 15 15:46:24 devel1 corosync[8412]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Jun 15 15:46:25 devel1 corosync[8412]: [TOTEM ] A new membership (1.1860e) was formed. Members joined: 4
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [QUORUM] Members[5]: 1 2 3 4 5
Jun 15 15:46:25 devel1 corosync[8412]: [MAIN ] Completed service synchronization, ready to provide service.

# systemctl status -l pveproxy
● pveproxy.service — PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-06-16 15:51:44 CEST; 18h ago
Process: 16677 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Process: 16686 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
Process: 18296 ExecReload=/usr/bin/pveproxy restart (code=exited, status=0/SUCCESS)
Main PID: 16688 (pveproxy)
Tasks: 5 (limit: 7372)
Memory: 288.8M
CGroup: /system.slice/pveproxy.service
├─ 1561 pveproxy worker
├─ 1812 pveproxy worker
├─ 2197 pveproxy worker
├─16688 pveproxy
└─36856 pveproxy worker (shutdown)

Jun 17 10:38:49 devel1 pveproxy[16688]: starting 1 worker(s)
Jun 17 10:38:49 devel1 pveproxy[16688]: worker 1812 started
Jun 17 10:38:49 devel1 pveproxy[1812]: Clearing outdated entries from certificate cache
Jun 17 10:39:31 devel1 pveproxy[48057]: worker exit
Jun 17 10:40:48 devel1 pveproxy[16688]: worker 1386 finished
Jun 17 10:40:48 devel1 pveproxy[16688]: starting 1 worker(s)
Jun 17 10:40:48 devel1 pveproxy[16688]: worker 2197 started
Jun 17 10:40:49 devel1 pveproxy[2196]: got inotify poll request in wrong process — disabling inotify
Jun 17 10:40:49 devel1 pveproxy[2197]: Clearing outdated entries from certificate cache
Jun 17 10:40:50 devel1 pveproxy[2196]: worker exit

# systemctl status -l pvedaemon
● pvedaemon.service — PVE API Daemon
Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-06-16 15:51:50 CEST; 18h ago
Process: 16725 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
Main PID: 16728 (pvedaemon)
Tasks: 6 (limit: 7372)
Memory: 298.7M
CGroup: /system.slice/pvedaemon.service
├─ 827 pvedaemon worker
├─16728 pvedaemon
├─36835 task UPID:devel1:00008FE3:1265C8BA:5EE9C7CB:vncproxy:261:zima@ldap:
├─36837 /usr/bin/perl /usr/sbin/qm vncproxy 261
├─47263 pvedaemon worker
└─48069 pvedaemon worker

Jun 17 10:40:56 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’
Jun 17 10:41:01 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’
Jun 17 10:41:04 devel1 pvedaemon[48069]: successful auth for user ‘icinga@pve’
Jun 17 10:41:04 devel1 pvedaemon[827]: successful auth for user ‘icinga@pve’
Jun 17 10:41:14 devel1 pvedaemon[827]: successful auth for user ‘icinga@pve’
Jun 17 10:41:22 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’
Jun 17 10:41:39 devel1 pvedaemon[48069]: successful auth for user ‘ @ldap’
Jun 17 10:41:50 devel1 pvedaemon[827]: successful auth for user ‘ @ldap’
Jun 17 10:41:50 devel1 pvedaemon[48069]: successful auth for user ‘icinga@pve’
Jun 17 10:41:52 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’


Connection error 401: permission denied — invalid PVE ticket

Active Member

Hi,
for a couple of hours now I have been getting this message in the Proxmox GUI on cluster host 1, but on cluster host 5 everything is fine.

What could be the reason, and which steps should I take to narrow it down?

I looked into pveproxy, which I guess is where the PVE tickets come from, and saw some

Nov 11 13:15:51 prox01 pveproxy[18082]: got inotify poll request in wrong process - disabling inotify

and a few
Nov 11 13:20:39 prox01 pveproxy[8722]: 2020-11-11 13:20:39.497791 +0100 error AnyEvent::Util: Runtime error in AnyEvent::guard callback: Can't call method "_put_session" on an undefined value at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent/Handle.pm line 2259 during global destruction.

Active Member

dcsapak

Proxmox Staff Member

Best regards,
Dominik

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

Active Member

# timedatectl timesync-status
Server: ntp1
Poll interval: 34min 8s (min: 32s; max 34min 8s)
Leap: normal
Version: 4
Stratum: 1
Reference: PPS
Precision: 2us (-19)
Root distance: 442us (max: 5s)
Offset: +1.527ms
Delay: 3.383ms
Jitter: 3.488ms
Packet count: 5068
Frequency: +20,113ppm

root@prox01:/var/log# timedatectl timesync-status
Server: ntp1
Poll interval: 34min 8s (min: 32s; max 34min 8s)
Leap: normal
Version: 4
Stratum: 1
Reference: PPS
Precision: 2us (-19)
Root distance: 411us (max: 5s)
Offset: +39us
Delay: 5.539ms
Jitter: 2.028ms
Packet count: 5070
Frequency: +2,812ppm

dcsapak

Proxmox Staff Member

also the time of the client (browser)?

anything else in the syslog?


4ps4all

New Member

I'm getting the same problem since today: I can't log in through the Proxmox GUI on a single Proxmox node (SSH works).
I recently used the pve-proxmox-backup ISO on another Proxmox node, but the two nodes are not in a cluster.
The single node I can't log in to through the GUI should have made VM and CT backups on the other node, I hope.

pve version:
pve-manager/6.2-15/48bd51b6 (running kernel: 5.4.65-1-pve)


permission denied — invalid PVE ticket (401)

T.Herrmann

Active Member

Maybe, but please test it. My experience with this "round-up" sequence has been good so far.

service pve-cluster restart && service pvedaemon restart && service pvestatd restart && service pveproxy restart

service pvedaemon restart && service pveproxy restart

hallaji

New Member

willprox

New Member

Delete authkey.pub, then restart the nodes that have problems.

chengkinhung

Active Member

Hi, I just encountered this issue on PVE 6.4-15. I have 5 nodes in a cluster and found that only one node got "permission denied": it could not read stats from any of the other nodes. I could still log in to this node directly, but could not read the other nodes' stats from it either. So I checked the pveproxy service, found it was not working well, and restarting the pveproxy service on this node solved the issue.
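
The restart the poster describes is presumably just the following, run on the affected node:

```shell
# Check whether pveproxy is healthy, then restart it
systemctl status pveproxy
systemctl restart pveproxy
```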

RolandK

Active Member

I also get an intermittent "Connection error 401: permission denied - invalid PVE ticket" in our 6.4 cluster.

Typically, this goes away after a while without any action.

How can I debug this?

RolandK

Active Member

Apparently, the authkey.pub had been changed.

The problem went away after doing "touch /etc/pve/authkey.pub".

What was going wrong here?

I'd be interested in why an outdated timestamp on authkey.pub causes intermittent "invalid PVE ticket" errors and logouts in the browser.

You can do some action and then another action fails. Connecting to the console of a VM works and then fails again. Reloading the browser window fixes it for some actions, and shortly after you get an invalid PVE ticket again.

Why does a simple timestamp update on that file fix such an issue?

Why does the browser behave so weirdly and unpredictably?

To me this looks a little like buggy behaviour, especially because it happens out of nowhere. (I have no time-sync problem, and I did not find any errors in the logs.)
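
To see whether an out-of-date key timestamp is the culprit on your own cluster, a sketch would be to compare the key's mtime with the node's clock before applying the touch workaround reported above:

```shell
# Show the modification time of the ticket-signing public key
stat -c '%y  %n' /etc/pve/authkey.pub

# Compare with the current system time
date

# Workaround from this thread: refresh the timestamp
touch /etc/pve/authkey.pub
```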

Источник


The problem

Exactly two hours after restarting HA, the Proxmox integration no longer works, with the error: 401 Unauthorized: permission denied - invalid PVE ticket

Environment

  • Home Assistant Core release with the issue: 0.111.3
  • Last working Home Assistant Core release (if known): 0.110.x
  • Operating environment (Home Assistant/Supervised/Docker/venv): Home Assistant
  • Integration causing this issue: Proxmox VE
  • Link to integration documentation on our website: https://www.home-assistant.io/integrations/proxmoxve/

Problem-relevant configuration.yaml

proxmoxve:
  - host: <ip address>
    username: <user>
    password: <pwd>
    verify_ssl: false
    realm: pam
    nodes:
      - node: pve
        vms:
          - 100
          - 102
          - 103
        containers:
          - 101

Traceback/Error logs

Logger: homeassistant.helpers.entity
Source: components/proxmoxve/binary_sensor.py:96
First occurred: 17:40:58 (1026 occurrences)
Last logged: 19:53:10

Update for binary_sensor.pve_hassio_running fails
Update for binary_sensor.pve_omv_running fails
Update for binary_sensor.pve_hassio_test_running fails
Update for binary_sensor.pve_lamp_running fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 279, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 472, in async_device_update
    await self.hass.async_add_executor_job(self.update)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/src/homeassistant/homeassistant/components/proxmoxve/binary_sensor.py", line 83, in update
    item = self.poll_item()
  File "/usr/src/homeassistant/homeassistant/components/proxmoxve/binary_sensor.py", line 96, in poll_item
    .get(self._item_type.name)
  File "/usr/local/lib/python3.7/site-packages/proxmoxer/core.py", line 105, in get
    return self(args)._request("GET", params=params)
  File "/usr/local/lib/python3.7/site-packages/proxmoxer/core.py", line 94, in _request
    resp.reason, resp.content))
proxmoxer.core.ResourceException: 401 Unauthorized: permission denied - invalid PVE ticket - b''

Additional information

HassOS is a proxmox virtual machine


Description

After logging into the web front end, PVE constantly asks me to log in again.

Since it's impossible to stay logged in, I can't upload a big ISO image (like Windows); a window saying "Permission denied (invalid ticket 401)" pops up during the process.

After some searching in the PVE forum, I found out this is a system time issue. Execute the command

journalctl -u pvedaemon

to check the pvedaemon journal; it shows the system start time is 8 hours behind the current time.


Solution

I found two solutions; one worked (for me), the other didn't.

Solution 1

Install ntpdate to sync the time to an NTP server (this didn't help me).

  1. Install ntpdate
    apt install ntp ntpdate
  2. Sync time
    ntpdate -u ntp.aliyun.com
    # you can use other ntp server, like time.windows.com

Solution 2

Tell Linux to treat the motherboard BIOS time (the RTC, or Real-Time Clock) as local time.

  1. Execute command
    timedatectl set-local-rtc 1
    hwclock --localtime --systohc

The final result

The local time is the same as the RTC time, and the universal time is different.
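
After solution 2, the result can be checked with timedatectl; with the RTC kept in local time, the output should show "RTC in local TZ: yes" (note that timedatectl itself warns that this mode is generally discouraged outside dual-boot setups):

```shell
# Verify the RTC mode after applying solution 2
timedatectl
# Expect "Local time" to match "RTC time", and "RTC in local TZ: yes"
```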

Connection error 401: permission denied — invalid PVE ticket

Active Member

HI,
since a couple of hours I do get this message at the proxmox gui on cluster host 1
but on cluster host 5 everything is fine.

What could be the reason and which steps have to be done to narrow it down

I looked into pveproxy where I guess are the pve tickets from and saw some

Nov 11 13:15:51 prox01 pveproxy[18082]: got inotify poll request in wrong process — disabling inotify

and a few
Nov 11 13:20:39 prox01 pveproxy[8722]: 2020-11-11 13:20:39.497791 +0100 error AnyEvent::Util: Runtime error in AnyEvent::guard callback: Can’t call method «_put_session» on an undefined value at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent/Handle.pm line 2259 during global destruction.

Active Member

dcsapak

Proxmox Staff Member

Best regards,
Dominik

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

Active Member

# timedatectl timesync-status
Server: ntp1
Poll interval: 34min 8s (min: 32s; max 34min 8s)
Leap: normal
Version: 4
Stratum: 1
Reference: PPS
Precision: 2us (-19)
Root distance: 442us (max: 5s)
Offset: +1.527ms
Delay: 3.383ms
Jitter: 3.488ms
Packet count: 5068
Frequency: +20,113ppm

root@prox01:/var/log# timedatectl timesync-status
Server: ntp1
Poll interval: 34min 8s (min: 32s; max 34min 8s)
Leap: normal
Version: 4
Stratum: 1
Reference: PPS
Precision: 2us (-19)
Root distance: 411us (max: 5s)
Offset: +39us
Delay: 5.539ms
Jitter: 2.028ms
Packet count: 5070
Frequency: +2,812ppm

dcsapak

Proxmox Staff Member

also the time of the client (browser)?

anything else in the syslog?

Best regards,
Dominik

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

4ps4all

New Member

I’m getting same problem since today, I can’t login through proxmox gui in a single proxmox node (ssh works).
I used lastly pve-promox-backup.iso in another proxmox node, but the two nodes are not in a cluster.
The single node I can’t login through proxmox gui should have done vm and ct backups in the other node, I hope so.

pve version:
pve-manager/6.2-15/48bd51b6 (running kernel: 5.4.65-1-pve)

Источник

permission denied — invalid PVE ticket (401)

T.Herrmann

Active Member

Maybe but please test it. My experience with this «Round up» order was good until now.

service pve-cluster restart && service pvedaemon restart && service pvestatd restart && service pveproxy restart

service pvedaemon restart && service pveproxy restart

hallaji

New Member

willprox

New Member

delete authkey.pub then restart that nodes have problems.

chengkinhung

Active Member

Hi, I just encounter this issue in PVE 6.4-15, I have 5 nodes in cluster, found only one node got «permission denied», can not read stat from all the other nodes, I can till login this node directly, but can not read all the other nodes’s stata from this node too. So I check the pveproxy service and found it was not working well, so I just restart the pveproxy service on this node and solve this issue:

RolandK

Active Member

i also get intermittend «Connection error 401: permission denied — invalid PVE ticket» in our 6.4 cluster

typically, this goes away after a while without any action.

how can i debug this ?

RolandK

Active Member

apparently, the authkey.pub had been changed

the problem went away after doing "touch /etc/pve/authkey.pub"

what was going wrong here?

i'd be interested why an outdated timestamp on authkey.pub causes intermittent "invalid PVE ticket" errors and logouts in the browser.

you can do some action and then another action fails. connecting to the console of a VM works and then fails again. reloading the browser window fixes it for some actions, and shortly after, you get "invalid PVE ticket" again.

why does a simple timestamp update on that file fix such an issue?

why does the browser behave so weirdly and unpredictably?

to me this looks a little like buggy behaviour, especially because it happens out of nowhere. (i have no time-sync problem and i did not find any errors in the logs)
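Current PVE releases rotate the ticket-signing key roughly once a day, and tickets are validated against that key, so a timestamp that is in the future or far behind the rotation schedule can plausibly produce exactly these intermittent failures. A hedged sketch for spotting such a timestamp (the 24-hour threshold and the classification labels are my own; only the default path is standard):

```shell
#!/bin/sh
# Classify the age of a PVE ticket-signing key file. A timestamp in the
# future, or one much older than the daily rotation, can both coincide
# with "invalid PVE ticket" errors.
authkey_state() {
    key="${1:-/etc/pve/authkey.pub}"   # default is the standard PVE path
    now=$(date +%s)
    mtime=$(stat -c %Y "$key") || return 1
    age=$(( now - mtime ))
    if [ "$age" -lt 0 ]; then
        echo "future-timestamp"        # clock skew or a stray touch?
    elif [ "$age" -gt 86400 ]; then
        echo "stale"                   # older than one rotation period
    else
        echo "ok"
    fi
}
```

Anything other than "ok" would be a reason to compare clocks between nodes before reaching for `touch`.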

Source

3 Node cluster "permission denied - invalid PVE ticket (401)"

BugProgrammer

New Member

Tried to create a 3-node cluster with a fresh Proxmox VE 6.0-4 install.
Cluster creation works and adding a second node works as well, but after I added the 3rd node I get "permission denied - invalid PVE ticket (401)" (only for the third; the other 2 are still working).

In the web interface I can access nodes 1 and 2, but 3 aborts with this message. Node 3 can't access any node.

Dominic

Proxmox Retired Staff

Did you try clearing your browser cache or using a different browser?

Best regards,
Dominic


BugProgrammer

New Member

What I tried until now:
- use another browser/workstation to access
- separate the 3rd node, use delnode on the other nodes, then re-add
- tried the above and, before re-adding, cleared all references I could find on the 2 working nodes
- checked timedatectl and synced the time and timezone between all nodes
- reinstalled node 3, synced the time, and added it to the cluster again (after clearing all references from the other nodes)

None of this worked. After "pvecm add ip-of-the-first-node" it says successful, and the web panel shows the node in the cluster with its local and local-lvm storage. When I expand it I get "permission denied - invalid PVE ticket (401)".

No idea what I should try next.
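One more step that may be worth trying in this situation (an assumption on my part, not something confirmed in this thread): refreshing the node's certificates and restarting the API proxy on the broken node, since a stale pveproxy serving pre-join certificates can make the other nodes reject its tickets. A dry-run sketch using the standard `pvecm updatecerts` command:

```shell
#!/bin/sh
# Refresh node certificates and restart the API proxy on the node that
# shows the 401. DRY_RUN=1 previews the commands without running them.
post_join_refresh() {
    for cmd in "pvecm updatecerts --force" "systemctl restart pveproxy"; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "$cmd"
        else
            $cmd || return 1
        fi
    done
}

DRY_RUN=1 post_join_refresh
```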

BugProgrammer

New Member

crazy, reinstalled all 3 nodes and now it worked

hibouambigu

Member

Same thing is happening to me too. Fourth cluster I've built, but first time using the GUI and a separate corosync network to do so (now with 6.0-4).

Hosts can all ping one another on the corosync network, and all went fine until joining nodes #2 and #3 via the GUI.

Is the corosync cluster network supposed to be able to reach the NTP server directly from that separate network?

EDIT: more detail:

2/3 nodes seem to be OK. The 3rd node has joined the cluster and is visible in the other 2 nodes' management windows via the web UI.

Node 3 asks for a login each time it is visited. Nothing works from this node's web UI, but it does believe it is joined to the cluster (nodes 1 and 2 are visible, but clicking anything throws errors: 401: no ticket in the shell, and "NaN" repeatedly in other fields within the cluster management).

hibouambigu

Member


For anyone else knocking about with this:
Seem to have solved it for now. Still not sure why the error happened during cluster creation!

2.) restarted nodes.
3.) cleared browser cookies for all three nodes.

...still had the errors, until the web browser itself was purged of its cache, closed, and restarted.

Source

Proxmox VE integration - 401 Unauthorized: permission denied invalid PVE ticket #36853

Comments

maxalbani commented Jun 16, 2020 •

The problem

Exactly two hours after restarting HA, the Proxmox integration no longer works, with error: 401 Unauthorized: permission denied - invalid PVE ticket

Environment

  • Home Assistant Core release with the issue: 0.111.3
  • Last working Home Assistant Core release (if known): 0.110.x
  • Operating environment (Home Assistant/Supervised/Docker/venv): Home Assistant
  • Integration causing this issue: Proxmox VE
  • Link to integration documentation on our website: https://www.home-assistant.io/integrations/proxmoxve/

Problem-relevant configuration.yaml

verify_ssl: false
realm: pam
nodes:
  - node: pve
    vms:
      - 100
      - 102
      - 103
    containers:
      - 101

Traceback/Error logs

Additional information

HassOS is a proxmox virtual machine


probot-home-assistant bot commented Jun 16, 2020

Hey there @k4ds3, @jhollowe, mind taking a look at this issue, as it's been labeled with an integration (proxmoxve) you are listed as a codeowner for? Thanks!
(message by CodeOwnersMention)

maxalbani commented Jun 17, 2020

I tried in a test installation with version 0.110, and the integration works correctly, renewing the ticket after 2 hours.
So it appears that version 0.111 broke the integration.
@k4ds3, @jhollowe any ideas?

StealthChesnut commented Jun 17, 2020 •

Same thing here. Rebooted my Home Assistant install at 14:11 yesterday; at 16:11 I'm getting the exact same errors in the logs as @maxalbani, but with different binary_sensor names:

the last line of each entry is the same 401:
proxmoxer.core.ResourceException: 401 Unauthorized: permission denied - invalid PVE ticket - b''

I am on HomeAssistant 0.111.3

jhollowe commented Jun 17, 2020

Interesting. How often is HA polling your PVE? the proxmoxer library should be renewing the ticket as long as it is polling at least once every two hours.

maxalbani commented Jun 17, 2020

Interesting. How often is HA polling your PVE? the proxmoxer library should be renewing the ticket as long as it is polling at least once every two hours.

With the ticket valid for the first two hours, HA updates the sensors about every 30 seconds, so that's not the problem.
Something changed with the 0.111.x version of HA, because everything worked correctly up to 0.110.
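The two-hour figure matches the default lifetime of a PVE authentication ticket, so any client that keeps a ticket longer than that will see exactly this 401. A sketch of the renewal decision a long-running client needs to make; the five-minute safety margin is my own choice for illustration, not anything from proxmoxer:

```shell
#!/bin/sh
# Decide whether a PVE ticket should be renewed. Tickets live for two
# hours; renewing a few minutes early avoids racing the expiry.
TICKET_LIFETIME=$(( 2 * 3600 ))
RENEW_MARGIN=300

needs_renewal() {   # $1 = epoch seconds when the ticket was issued
    now=$(date +%s)
    age=$(( now - $1 ))
    [ "$age" -ge $(( TICKET_LIFETIME - RENEW_MARGIN )) ]
}
```

A client would call `needs_renewal` before each poll and re-authenticate when it returns true.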

jhollowe commented Jun 17, 2020

I’ll look at it after work today.
I’m planning to add API token authentication for this integration to try to alleviate this problem.

maxalbani commented Jun 17, 2020

jhollowe commented Jun 18, 2020

The authentication renewal was removed from the integration because it is now handled by the connection library. I’m running hass with a debug build of the library to see where the issue is.

For now I would recommend reverting to 0.110 if you need this working currently. You could also manually re-add the integration’s old renewal code if you want to use 0.111

maxalbani commented Jun 18, 2020

For now I would recommend reverting to 0.110 if you need this working currently. You could also manually re-add the integration’s old renewal code if you want to use 0.111

How can I manually re-add the old renewal code on Hass.io?
Thanks for your work!

jhollowe commented Jun 19, 2020

@maxalbani I’m not sure. With hassio (Home Assistant), it might be hard to do.

maxalbani commented Jun 19, 2020

@maxalbani I’m not sure. With hassio (Home Assistant), it might be hard to do.

I thought so too .
Do you have a forecast on solving the problem?

jhollowe commented Jun 19, 2020 •

It looks like something weird is happening with the library renewing the ticket. I think it is an issue with the library working within the async worker threads, so each thread is trying to renew and PVE is not liking it. I've got a test running now with only one container being polled; if that doesn't fail, I will know what the issue is. I just have to wait 2 hours every time I change something.
I could just revert to the integration's old renewal code, but I would like to try to get it working with the library's built-in renewal.

Source

REST API: 401 Unauthorized: permission denied - invalid PVE ticket

jan.svoboda

Member

I have some problems with the authentication ticket. I know there are multiple threads about the same issue; I tried to follow the steps from those threads that worked for others, but nothing worked for me.
I use the REST API very extensively, so it is crucial to have it working. I use the proxmoxer 1.1.0 Python 3 library with the HTTPS backend as a wrapper around the REST API.

I get HTTP 401 Unauthorized: permission denied - invalid PVE ticket while running Python scripts against the REST API quite often, and before the ticket's 2-hour lifetime has elapsed. Sometimes it doesn't obtain a ticket at all.
It happens a few times a week. I searched for clues in the logs but have never found the reason why this happens.

All nodes are NTP synchronized, I use the default systemd-timesyncd.

I would be very thankful if someone could look at the attached logs and advise me on what I should reconfigure or which other clues I should search for.

I have 5 Proxmox nodes in one cluster, all of them running the same version Proxmox VE 6.1-8.

# pveversion -v
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 7.4-1
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

# timedatectl
Local time: Wed 2020-06-17 10:43:41 CEST
Universal time: Wed 2020-06-17 08:43:41 UTC
RTC time: Wed 2020-06-17 08:43:41
Time zone: Europe/Prague (CEST, +0200)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
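When checking several nodes, the same inspection can be scripted. A minimal sketch (assuming the `timedatectl` output format shown above) that reads the output on stdin and succeeds only if the clock is synchronized:

```shell
#!/bin/sh
# Succeeds (exit 0) when the piped-in `timedatectl` output reports a
# synchronized system clock. Usage:
#   timedatectl | clock_synced && echo "in sync"
clock_synced() {
    grep -q "System clock synchronized: yes"
}
```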

# systemctl status -l systemd-timesyncd
● systemd-timesyncd.service — Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /usr/lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Mon 2020-06-15 15:25:07 CEST; 1 day 19h ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 4020 (systemd-timesyn)
Status: «Synchronized to time server for the first time 89.221.212.46:123 (0.debian.pool.ntp.org).»
Tasks: 2 (limit: 7372)
Memory: 6.5M
CGroup: /system.slice/systemd-timesyncd.service
└─4020 /lib/systemd/systemd-timesyncd

Jun 15 15:25:06 devel1 systemd[1]: Starting Network Time Synchronization.
Jun 15 15:25:07 devel1 systemd[1]: Started Network Time Synchronization.
Jun 15 15:25:07 devel1 systemd-timesyncd[4020]: Synchronized to time server for the first time 89.221.210.188:123 (0.debian.pool.ntp.org).
Jun 15 15:44:31 devel1 systemd-timesyncd[4020]: Timed out waiting for reply from 89.221.210.188:123 (0.debian.pool.ntp.org).
Jun 15 15:44:31 devel1 systemd-timesyncd[4020]: Synchronized to time server for the first time 89.221.212.46:123 (0.debian.pool.ntp.org).

# systemctl status -l corosync
● corosync.service — Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-06-15 15:46:01 CEST; 1 day 18h ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Main PID: 8412 (corosync)
Tasks: 9 (limit: 7372)
Memory: 147.6M
CGroup: /system.slice/corosync.service
└─8412 /usr/sbin/corosync -f

Jun 15 15:46:24 devel1 corosync[8412]: [KNET ] rx: host: 4 link: 0 is up
Jun 15 15:46:24 devel1 corosync[8412]: [KNET ] host: host: 4 (passive) best link: 0 (pri: 1)
Jun 15 15:46:25 devel1 corosync[8412]: [TOTEM ] A new membership (1.1860e) was formed. Members joined: 4
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [CPG ] downlist left_list: 0 received
Jun 15 15:46:25 devel1 corosync[8412]: [QUORUM] Members[5]: 1 2 3 4 5
Jun 15 15:46:25 devel1 corosync[8412]: [MAIN ] Completed service synchronization, ready to provide service.

# systemctl status -l pveproxy
● pveproxy.service — PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-06-16 15:51:44 CEST; 18h ago
Process: 16677 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Process: 16686 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
Process: 18296 ExecReload=/usr/bin/pveproxy restart (code=exited, status=0/SUCCESS)
Main PID: 16688 (pveproxy)
Tasks: 5 (limit: 7372)
Memory: 288.8M
CGroup: /system.slice/pveproxy.service
├─ 1561 pveproxy worker
├─ 1812 pveproxy worker
├─ 2197 pveproxy worker
├─16688 pveproxy
└─36856 pveproxy worker (shutdown)

Jun 17 10:38:49 devel1 pveproxy[16688]: starting 1 worker(s)
Jun 17 10:38:49 devel1 pveproxy[16688]: worker 1812 started
Jun 17 10:38:49 devel1 pveproxy[1812]: Clearing outdated entries from certificate cache
Jun 17 10:39:31 devel1 pveproxy[48057]: worker exit
Jun 17 10:40:48 devel1 pveproxy[16688]: worker 1386 finished
Jun 17 10:40:48 devel1 pveproxy[16688]: starting 1 worker(s)
Jun 17 10:40:48 devel1 pveproxy[16688]: worker 2197 started
Jun 17 10:40:49 devel1 pveproxy[2196]: got inotify poll request in wrong process — disabling inotify
Jun 17 10:40:49 devel1 pveproxy[2197]: Clearing outdated entries from certificate cache
Jun 17 10:40:50 devel1 pveproxy[2196]: worker exit

# systemctl status -l pvedaemon
● pvedaemon.service — PVE API Daemon
Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-06-16 15:51:50 CEST; 18h ago
Process: 16725 ExecStart=/usr/bin/pvedaemon start (code=exited, status=0/SUCCESS)
Main PID: 16728 (pvedaemon)
Tasks: 6 (limit: 7372)
Memory: 298.7M
CGroup: /system.slice/pvedaemon.service
├─ 827 pvedaemon worker
├─16728 pvedaemon
├─36835 task UPID:devel1:00008FE3:1265C8BA:5EE9C7CB:vncproxy:261:zima@ldap:
├─36837 /usr/bin/perl /usr/sbin/qm vncproxy 261
├─47263 pvedaemon worker
└─48069 pvedaemon worker

Jun 17 10:40:56 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’
Jun 17 10:41:01 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’
Jun 17 10:41:04 devel1 pvedaemon[48069]: successful auth for user ‘icinga@pve’
Jun 17 10:41:04 devel1 pvedaemon[827]: successful auth for user ‘icinga@pve’
Jun 17 10:41:14 devel1 pvedaemon[827]: successful auth for user ‘icinga@pve’
Jun 17 10:41:22 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’
Jun 17 10:41:39 devel1 pvedaemon[48069]: successful auth for user ‘ @ldap’
Jun 17 10:41:50 devel1 pvedaemon[827]: successful auth for user ‘ @ldap’
Jun 17 10:41:50 devel1 pvedaemon[48069]: successful auth for user ‘icinga@pve’
Jun 17 10:41:52 devel1 pvedaemon[47263]: successful auth for user ‘icinga@pve’

Source



Has anyone run into this? I can't get a cluster of 2 nodes on different networks working.

node1 - 1.2.3.4
node2 - 8.7.6.5

On the first node, pvecm create clust creates the cluster.
On the second, pvecm add 1.2.3.4 asks for the password, and then:

Dec 17 14:57:55 m11617 pmxcfs[7483]: [quorum] crit: quorum_initialize failed: 2
Dec 17 14:57:55 m11617 pmxcfs[7483]: [confdb] crit: cmap_initialize failed: 2
Dec 17 14:57:55 m11617 pmxcfs[7483]: [dcdb] crit: cpg_initialize failed: 2
Dec 17 14:57:55 m11617 pmxcfs[7483]: [status] crit: cpg_initialize failed: 2
Dec 17 14:58:00 m11617 systemd[1]: Starting Proxmox VE replication runner...
Dec 17 14:58:00 m11617 pvesr[8863]: error with cfs lock 'file-replication_cfg': no quorum!
Dec 17 14:58:00 m11617 systemd[1]: pvesr.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 17 14:58:00 m11617 systemd[1]: Failed to start Proxmox VE replication runner.
Dec 17 14:58:00 m11617 systemd[1]: pvesr.service: Unit entered failed state.
Dec 17 14:58:00 m11617 systemd[1]: pvesr.service: Failed with result 'exit-code'.
Dec 17 14:58:01 m11617 pmxcfs[7483]: [quorum] crit: quorum_initialize failed: 2
Dec 17 14:58:01 m11617 pmxcfs[7483]: [confdb] crit: cmap_initialize failed: 2
Dec 17 14:58:01 m11617 pmxcfs[7483]: [dcdb] crit: cpg_initialize failed: 2
Dec 17 14:58:01 m11617 pmxcfs[7483]: [status] crit: cpg_initialize failed: 2
Dec 17 14:58:01 m11617 cron[3211]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)
Dec 17 14:58:07 m11617 pmxcfs[7483]: [quorum] crit: quorum_initialize failed: 2
Dec 17 14:58:07 m11617 pmxcfs[7483]: [confdb] crit: cmap_initialize failed: 2
Dec 17 14:58:07 m11617 pmxcfs[7483]: [dcdb] crit: cpg_initialize failed: 2
Dec 17 14:58:07 m11617 pmxcfs[7483]: [status] crit: cpg_initialize failed: 2
Dec 17 14:58:13 m11617 pmxcfs[7483]: [quorum] crit: quorum_initialize failed: 2
Dec 17 14:58:13 m11617 pmxcfs[7483]: [confdb] crit: cmap_initialize failed: 2
Dec 17 14:58:13 m11617 pmxcfs[7483]: [dcdb] crit: cpg_initialize failed: 2
Dec 17 14:58:13 m11617 pmxcfs[7483]: [status] crit: cpg_initialize failed: 2

...that's the kind of noise I get.
Oddly, the node does sort of get added to the cluster and shows up, but there is no access to it.

proxmox 5.3-5
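The repeated `quorum_initialize failed` and `no quorum!` messages are the key symptom: in a two-node cluster corosync needs both votes, and if the nodes cannot reach each other over the cluster network (common across different subnets; on PVE 5.x, corosync 2 defaulted to multicast, which rarely crosses routers) the joining node never becomes quorate, so pmxcfs stays read-only and the GUI cannot reach it. Whether a node is quorate can be checked by parsing `pvecm status`; a minimal sketch that reads that output on stdin:

```shell
#!/bin/sh
# Succeeds when the piped-in `pvecm status` output reports the node
# quorate. Usage:
#   pvecm status | is_quorate && echo "quorate"
is_quorate() {
    grep -q '^Quorate: *Yes'
}
```

On the broken node described above, `pvecm status` would be expected to report `Quorate: No`.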
