Discussion:
Getting ZFS pools back.
Willem Jan Withagen
2018-04-28 15:42:41 UTC
Permalink
Hi,

I had this server crash on me, and first it just complained about not
being able to boot because it could not find the guid.

Now I cannot even import the pools any longer.
Although zdb -l /dev/ada0 still gives me data that indicates that there
should be a ZFS pool on that partition.

Any suggestions on how to get the pools/data back online?

Help would be highly appreciated, since restoring it from backups is
going to be quite some work.

Thanx,
--WjW
Richard Yao
2018-04-28 18:43:31 UTC
Permalink
What is the output of 'zpool import' with no arguments?
Willem Jan Withagen
2018-04-29 14:29:24 UTC
Permalink
Post by Richard Yao
What is the output of 'zpool import' with no arguments?
If I boot through a mem-stick....
# zpool import
#

So, an empty list: nothing to import.
In the meantime I rebuilt the system.

Was able to retrieve the data by a convoluted incantation of
'zpool import -fmNF' or something like that.
Made a full backup, and started fresh.
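
From memory it was roughly the following; the -R /mnt altroot is a precaution
to keep the datasets from mounting over the memstick's own filesystems, not
necessarily what I actually typed (see zpool(8) for the exact flag semantics):
zpool import -f -m -N -F -R /mnt zroot      # force, tolerate a missing log device, don't mount, rewind if needed
zpool import -f -m -N -F -R /mnt zfsdata
zfs mount -a                                # then mount everything under /mnt and copy the data off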

--WjW
Alan Somers
2018-04-29 14:40:49 UTC
Permalink
So your kernel couldn't find the pool. That might be due to a GEOM module
that wasn't loaded but should've been (were you using gmirror or geli or
something?). Or you might've accidentally destroyed the pool. It would
still show up in "zdb -l", albeit in the destroyed state. Or you might've
accidentally destroyed the partition. If the pool had resided on the
disk's last partition, then "zdb -l /dev/ada0" still would've seen the
label, since there's a copy of the label at the end of the device. But if
you've reused the disk, then there's no way to know for sure.
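
A couple of checks along those lines, in case it helps someone later; the
module names are just the obvious candidates, not a claim about this box:
kldstat | grep geom        # is the needed GEOM class loaded as a module at all?
kldload geom_mirror        # or geom_eli, geom_label, ... as appropriate
zdb -l /dev/ada0           # prints up to four labels; labels 2 and 3 sit at the
                           # end of the device, which is why they can survive a
                           # lost or overwritten partition table
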
-Alan
Willem Jan Withagen
2018-04-29 17:27:12 UTC
Permalink
Hi Alan,

I still have one of the original disks of the mirror, but the
system/hardware causing trouble is back in use in the DC.
No geli, or anything other than basic GEOM was involved.
Disks were running on GPT:
boot
swap
zroot
zfsdata
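
With that layout, the quick sanity checks from a rescue shell would be
something like the following; the p3/p4 indices just assume the order above:
gpart show ada0        # should show freebsd-boot, freebsd-swap and two freebsd-zfs partitions
zdb -l /dev/ada0p3     # labels for zroot
zdb -l /dev/ada0p4     # labels for zfsdata
zpool import           # what the kernel itself thinks is importable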

And yes zdb -l was able to find both pools: zroot, and zfsdata.
And I'm pretty sure I did not destroy them on purpose. ;-)

Trouble started when I installed (freebsd-update) 11.1 over a running
10.4. Which is sort of scary?

But because the system needed to go back on the air, I only did so much
to recover the original stuff. But it just kept nagging me over the GUID
it could not find. So for the sake of progress I reinstalled the system on
one of the mirror disks, keeping the other one.

So I could hook that disk up to my Freetest play box and see what that
brings, if anyone is interested.
But then again, the zpool import could have fixed what was broken in the
first place. Haven't looked at it yet, since the 12-hour straight
session yesterday was enough for the weekend.

--WjW
Jan Knepper
2018-04-29 17:57:07 UTC
Permalink
Post by Willem Jan Withagen
Trouble started when I installed (freebsd-update) 11.1 over a running
10.4. Which is sort of scary?
This does sound 'scary' as I am planning to do this in the (near) future...

Has anyone else experienced issues like this?

Generally I do build the new system software on a running system, but
then go to single user mode to perform the actual install.

I have done many upgrades like that over 18 or so years and never seen
or heard of an issue like this.

Thanks!

ManiaC++
Jan Knepper
Warner Losh
2018-04-29 18:21:19 UTC
Permalink
Post by Jan Knepper
This does sound 'scary' as I am planning to do this in the (near) future...
Has anyone else experienced issues like this?
Generally I do build the new system software on a running system, but then
go to single user mode to perform the actual install.
I have done many upgrades like that over 18 or so years and never seen or
heard of an issue like this.
11.x binaries aren't guaranteed to work with a 10.x kernel. So that's a bit
of a problem. freebsd-update shouldn't have let you do that either.

However, most 11.x binaries work well enough to at least bootstrap / fix
problems if booted on a 10.x kernel due to targeted forward compatibility.
You shouldn't count on it for long, but it generally won't totally brick
your box. In the past, and I believe this is still true, they work well
enough to compile and install a new kernel after pulling sources. The 10.x
-> 11.x syscall changes are such that you should be fine. At least if you
are on UFS.

However, the ZFS ioctls and such are in the bag of 'don't specifically
guarantee and also they change a lot' so that may be why you can't mount
ZFS by UUID. I've not checked to see if there's specifically an issue here
or not. The ZFS ABI is somewhat more fragile than other parts of the
system, so you may have issues here.

If all else fails, you may be able to PXE boot an 11 kernel, or boot off a
USB memstick image to install a kernel.
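
From a memstick environment that would go something like this; the pool name,
dataset layout and paths are illustrative, not a statement about this machine:
zpool import -f -R /mnt zroot             # import the on-disk root pool with its datasets under /mnt
mv /mnt/boot/kernel /mnt/boot/kernel.old  # keep the old kernel around
cp -Rp /boot/kernel /mnt/boot/kernel      # copy the memstick's 11.x kernel over
zpool export zroot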

Generally, while we don't guarantee forward compatibility (running newer
binaries on older kernels), we've generally built enough forward compat so
that things work well enough to complete the upgrade. That's why you
haven't hit an issue in 18 years of upgrading. However, the velocity of
syscall additions has increased, and we've gone from fairly stable (stale?)
ABIs for UFS to a more dynamic one for ZFS where backwards compat is a bit
of a crap shoot and forward compat isn't really there at all. That's likely
why you've hit a speed bump here.

Warner
Jan Knepper
2018-04-29 18:31:30 UTC
Permalink
Post by Warner Losh
11.x binaries aren't guaranteed to work with a 10.x kernel. So that's
a bit of a problem. freebsd-update shouldn't have let you do that either.
The process I have used so far is to svnup, build, reboot...
I have not closely looked at the procedures outlined in
/usr/src/UPDATING for 11.x. But do I read correctly that performing a
buildworld, buildkernel, then installworld and reboot to update from
10.4 to 11.x does not work?

Thanks!

ManiaC++
Jan Knepper
Warner Losh
2018-04-29 18:34:06 UTC
Permalink
Post by Jan Knepper
I have not closely looked at the procedures outlined in /usr/src/UPDATING
for 11.x. But do I read correctly that performing a buildworld,
buildkernel, then installworld and reboot to update from 10.4 to 11.x does
not work?
No. That will work. If you always install a new kernel and reboot
(especially across major releases) and then install the new binaries,
you're safe. You won't get into a situation where new binaries are running
on an old kernel. As far as I know that's not broken, even with the strange
ABI issues I talk about. That's only when you're running 11.x binaries on a
10.x kernel, not the other way around.
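
For reference, the kernel-first sequence from the handbook and
/usr/src/UPDATING looks roughly like this (mergemaster -p before installworld,
or etcupdate, being the usual extra step across major versions):
cd /usr/src
make buildworld
make buildkernel
make installkernel
shutdown -r now        # come back up on the new kernel first
cd /usr/src
mergemaster -p         # pre-installworld merge of users/groups etc.
make installworld
mergemaster            # or etcupdate
shutdown -r now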

Warner
Craig Leres
2018-04-29 19:36:05 UTC
Permalink
Post by Warner Losh
If you always install a new kernel and reboot
(especially across major releases) and then install the new binaries,
you're safe.
I upgraded 40+ systems from 10.3-RELEASE to 11.1-RELEASE over the last
few weeks including 8 or so with zfs partitions (but all boot off of
ufs2). The work flow I converged on was:

- (I use rcs for configs so) co -l all customized config files
- Check /etc/freebsd-update.conf for the desired config
- Download the upgrade updates (freebsd-update upgrade -r 11.1-RELEASE)
- Check /etc/rc.conf and disable kern_securelevel if enabled
- Check/update /etc/resolv.conf if using bind9*
- Switch to /usr/bin/sshd if using openssh-portable
- Copy and install custom 11.1 kernel from my build server
- Stop most services
- Save a list of installed packages (an alternative is sketched after this list):
pkg info|sed -e 's/-[0-9a-zA-Z._,]* *.*//' > /var/tmp/a
- Remove all packages (pkg-static delete -fya)
- Reboot
- Run "freebsd-update install" three times
- Reinstall packages:
pkg update -f
pkg clean -ay
pkg install -y `cat /var/tmp/a`
- Check/reset/checkin configs and reboot
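
The sed expression above works, but pkg can also emit bare names or origins
directly, which may be a bit more robust; either output feeds back into
pkg install the same way:
pkg query "%n" > /var/tmp/a     # package names only
pkg query "%o" > /var/tmp/a     # or origins (category/name), which pkg install also accepts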

I had zero^H^H^H^Hno zfs issues.
Post by Willem Jan Withagen
Most of my upgrades went smoothly, other than being pestered by
files that only changed in version number and/or comments.
I also find this annoying but started manually updating things that were
problematic before starting which minimized freebsd-update merging.

Craig
Willem Jan Withagen
2018-04-29 21:20:02 UTC
Permalink
Post by Warner Losh
However, most 11.x binaries work well enough to at least bootstrap / fix
problems if booted on a 10.x kernel due to targeted forward
compatibility. You shouldn't count on it for long, but it generally
won't totally brick your box. In the past, and I believe this is still
true, they work well enough to compile and install a new kernel after
pulling sources. The 10.x -> 11.x syscall changes are such that you
should be fine. At least if you are on UFS.
I have been doing that kind of thing for years and years. Even upgrading
over NFS and stuff. Sometimes it is a bit too close to the sun and
things burn. But it never crashed this badly.
Post by Warner Losh
However, the ZFS ioctls and such are in the bag of 'don't specifically
guarantee and also they change a lot' so that may be why you can't mount
ZFS by UUID. I've not checked to see if there's specifically an issue
here or not. The ZFS ABI is somewhat more fragile than other parts of
the system, so you may have issues here.
If all else fails, you may be able to PXE boot an 11 kernel, or boot off
a USB memstick image to install a kernel.
Tried to replace just about everything, both in the boot partition (first
growing it to take the > 64K gptzfsboot) and in /boot from the memstick.
But the error never went away.
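
For the record, rewriting the bootcode itself is the usual gpart incantation
(index 1 assuming the freebsd-boot partition is the first one, as in the
layout above):
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0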

Never had ZFS die on me this badly, such that I could not get it back.
Post by Warner Losh
Generally, while we don't guarantee forward compatibility (running newer
binaries on older kernels), we've generally built enough forward compat
so that things work well enough to complete the upgrade. That's why you
haven't hit an issue in 18 years of upgrading. However, the velocity of
syscall additions has increased, and we've gone from fairly stable
(stale?) ABIs for UFS to a more dynamic one for ZFS where backwards
compat is a bit of a crap shoot and forward compat isn't really there at
all. That's likely why you've hit a speed bump here.
Come to think of it, I did not do this step with freebsd-update, since I
was not at an official release yet. I was going to 11.1-RELEASE, to be
able to start using freebsd-update.

So I don't think I did exactly that... But I tried so much yesterday.
Normally I would installkernel, reboot, installworld, mergemaster,
reboot for systems that are not up for freebsd-update.

--WjW
Willem Jan Withagen
2018-04-30 10:37:45 UTC
Permalink
Post by Willem Jan Withagen
Normally I would installkernel, reboot, installworld, mergemaster,
reboot for systems that are not up for freebsd-update.
Right,

The story gets even sadder .....
Took the "spare" disk home, and just connected it to an older SuperMicro
server I had lying about for Ceph tests. And lo and behold, it just boots.

So that system got upgraded from: 10.2 -> 10.4 -> 11.1
No complaints about anything.

So now I'm inclined to point at older hardware with an old BIOS, which
confused ZFS, or probably more precisely gptzfsboot.

From dmidecode:
System Information
        Manufacturer: Supermicro
        Product Name: H8SGL
        Version: 1234567890
BIOS Information
        Vendor: American Megatrends Inc.
        Version: 3.5
        Release Date: 11/25/2013
        Address: 0xF0000

We only have 1 of those, so further investigation, and/or tinkering, in
combination with the hardware will be impossible.

--WjW
Willem Jan Withagen
2018-05-01 21:25:41 UTC
Permalink
Post by Willem Jan Withagen
So now I'm inclined to point at older hardware with an old BIOS, which
confused ZFS, or probably more precisely gptzfsboot.
We only have 1 of those, so further investigation, and/or tinkering, in
combination with the hardware will be impossible.
Today I found the messages below in my daily report of the server:
+NMI ISA 3c, EISA ff
+NMI ISA 3c, EISA ff
+NMI ISA 3c, EISA ff
+NMI ... going to debugger
+NMI ... going to debugger
+NMI ISA 3c, EISA ff
+NMI ISA 2c, EISA ff
+NMI ... going to debugger
+NMI ... going to debugger
+NMI ISA 2c, EISA ff
+NMI ISA 3c, EISA ff
+NMI ... going to debugger
+NMI ... going to debugger
+NMI ... going to debugger
+NMI ISA 3c, EISA ff
+NMI ... going to debugger

Could these things have anything to do with the problem I had trying to
find the pools?

--WjW
Andriy Gapon
2018-05-02 09:47:22 UTC
Permalink
Post by Willem Jan Withagen
So now I'm inclined to point at older hardware with an old BIOS, which confused
ZFS, or probably more precisely gptzfsboot.
I think that this alone wouldn't explain the problems you had with zpool import.
Maybe you had multiple issues at the same time.
- something with BIOS caused trouble with booting
- something else, e.g. kernel-userland mismatch, caused zpool commands to misbehave
--
Andriy Gapon
Willem Jan Withagen
2018-05-02 12:00:43 UTC
Permalink
Post by Andriy Gapon
I think that this alone wouldn't explain the problems you had with zpool import.
Maybe you had multiple issues at the same time.
- something with BIOS caused trouble with booting
- something else, e.g. kernel-userland mismatch, caused zpool commands to misbehave
More than likely you are right.
What was frustrating, however, is that most trouble I run into I can fix.
Doing FreeBSD for so long means I don't easily give up.
But after 4 hours of fiddling with boot sectors, loaders, settings and
whatnot, I gave up. Also because I did not want to jeopardize the data.
(It is on backup, but the backup was already 8 hours stale.)

But hey, the system is back up. And I haven't heard anybody complaining about
missing data, or other failures....

--WjW

Willem Jan Withagen
2018-04-29 18:32:47 UTC
Permalink
Post by Jan Knepper
This does sound 'scary' as I am planning to do this in the (near) future...
Has anyone else experienced issues like this?
Generally I do build the new system software on a running system, but
then go to single user mode to perform the actual install.
I have done many upgrades like that over 18 or so years and never seen
or heard of an issue like this.
Hi Jan,

Most of my upgrades went smoothly, other than being pestered by files that
only changed in version number and/or comments.

This is a rather old server on which I installed zfs-on-root, back when there
were only howtos and no automagic installers. So I guess that it might
even be from the 9.x days. It went through several manual upgrades and
at least one online disk replacement with growing disks (500G to 4T).
All of that went well, but it could very well be that some odd bits and
pieces were missing or left over.

So don't exclude that there is pilot error involved.
I tried Google to find out more about fixing the system when this error gets
reported, but only found a rather hefty session with Andriy Gapon, I think.
And that was for 9.x or so.
Nothing else...

So given my need to go on, I just reinstalled.

--WjW
Jan Knepper
2018-04-29 18:37:21 UTC
Permalink
Post by Willem Jan Withagen
Most of my upgrades went smoothly, other than being pestered by files
that only changed in version number and/or comments.
Yeah... Know about those... mergemaster has an option to alleviate some of
that pain though...
Post by Willem Jan Withagen
This is a rather old server on which I installed zfs-on-root, back when there
were only howtos and no automagic installers. So I guess that it might
even be from the 9.x days. It went through several manual upgrades and
at least one online disk replacement with growing disks (500G to 4T).
All of that went well, but it could very well be that some odd bits and
pieces were missing or left over.
OK... That might explain a thing or two... I do recall installing 9.x I
think on a new server (hardware) and having to do the ZFS setup
manually... Everything currently runs 10.x
Post by Willem Jan Withagen
So don't exclude that there is pilot error involved.
I tried Google to find out more about fixing the system when this error
gets reported, but only found a rather hefty session with Andriy Gapon, I think.
And that was for 9.x or so.
Nothing else...
So given my need to go on, I just reinstalled.
OK! Thank you for letting me know!

ManiaC++
Jan Knepper