Thursday, December 4, 2014

CentOS 7, ipv6 and yum

If you need to disable IPv6 on CentOS 7, you can follow the method found on many sites via a quick Google search:


vi /etc/sysctl.d/disable-ipv6.conf

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1


Then load the new settings with sysctl -p /etc/sysctl.d/disable-ipv6.conf (or sysctl --system), since a plain sysctl -p only reads /etc/sysctl.conf.

However, yum still tries to contact IPv6 addresses, so the fix is to add this line

ip_resolve=4

to

/etc/yum.conf
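
For example, a minimal sequence to apply and verify everything (the sysctl file name is the one created above; on a default CentOS 7 yum.conf, [main] is the only section, so appending the option at the end of the file works):

sysctl -p /etc/sysctl.d/disable-ipv6.conf
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
echo "ip_resolve=4" >> /etc/yum.conf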

mysql-workbench issues on Ubuntu 14.10 workarounds

For the "data grid not displaying resultset is blank" issue, download and install the .deb packages linked in comment #24 of bug #1376154.

Then, for the problem with the SSH tunnel, look at bug #1385147 and apply the patch from comment #7.

Friday, November 21, 2014

mysql-workbench issues on Ubuntu 14.10

Since installing Ubuntu 14.10 I am no longer able to use mysql-workbench.


Data grid not displaying resultset is blank

https://bugs.launchpad.net/ubuntu/+source/mysql-workbench/+bug/1376154

 
Unable to connect to remote MySQL server via SSH using MySQLWorkbench

https://bugs.launchpad.net/ubuntu/+source/mysql-workbench/+bug/1385147

Friday, October 10, 2014

falcao.js: an IP Address and MAC address Tracker / Monitor

Just out of curiosity, I have been playing with Node.js lately.
As stated in other parts of this blog, I am not a developer. I know a little programming theory and I have used various languages, but I'm not a programmer.

So, just to try Node.js, I've developed a tool called Falcao.js.

Falcao.js is a remake of Hawk (hosted on Sourceforge):

"Hawk is an IP address tracking utility to monitor and compare what's answering on your network and what's in DNS. It can identify unauthorized address usage, or show you which addresses in DNS haven't been used in a while and can be reclaimed".

Falcao.js has the same purposes.

You can find more information in the GitHub repository, https://github.com/alcir/falcao.js, and take a look at the Wiki.



Monday, September 22, 2014

glpi entities expanded by default

To have the entities tree expanded by default, edit ajax/entitytreesons.php and force $path['expanded'] to true in both places where it is set:

//   $path['expanded'] = isset($ancestors[$ID]);
     $path['expanded'] = true;

//   $path['expanded'] = isset($ancestors[$row['id']]);
     $path['expanded'] = true;

Friday, September 19, 2014

SmartOS: move (migrate) a VM to another Global Zone

To move a running virtual machine, either a zone or a KVM instance, from one hypervisor (global zone) to another, I usually follow these steps in a handcrafted way.

To minimize downtime, the procedure has two steps.
The first snapshot and transfer of the ZFS filesystems is done without halting the VM.
In the second step we shut down the VM, take another snapshot, and send it as an incremental transfer, which is very fast.

First of all, you need the UUID of the VM you want to move from one global zone to the other.

[root@gz1 /]# vmadm list
UUID                                  TYPE  RAM      STATE             ALIAS
561b686e-3119-4ab0-932e-20fc944fb001  KVM   512      running           vm1
e44f3c76-4acb-11e3-a536-a7cfa8b66838  OS    512      running           vm2
b803b5b9-bc86-4d0f-b450-16862e7bd7ed  OS    2048     running           vm3



Now we need the list of all the ZFS filesystems related to that VM.

[root@gz1 ~]# zfs list -o name | grep 561b686e-3119-4ab0-932e-20fc944fb001

zones/561b686e-3119-4ab0-932e-20fc944fb001
zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0
zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1
zones/cores/561b686e-3119-4ab0-932e-20fc944fb001


Now we must create a snapshot of each ZFS filesystem.
Note: at this point the VM doesn't need to be stopped; you can leave it up and running, precisely to minimize downtime.

[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend
[root@gz1 ~]# zfs snapshot zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend


Now it is time to send these snapshots to the destination. It may require a lot of time.

[root@gz1 ~]# zfs send zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend | ssh gz2.domain zfs receive -v zones/561b686e-3119-4ab0-932e-20fc944fb001
[root@gz1 ~]# zfs send zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend | ssh gz2.domain zfs receive -v zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0
[root@gz1 ~]# zfs send zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend | ssh gz2.domain zfs receive -v zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1
[root@gz1 ~]# zfs send zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend | ssh gz2.domain zfs receive -v zones/cores/561b686e-3119-4ab0-932e-20fc944fb001
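
If the VM has several datasets, a small loop can take the first snapshots and do the full sends in one pass, instead of typing each command. This is only a sketch based on the commands above; adjust the UUID and the destination host to your case.

# snapshot and full-send every dataset belonging to the VM
UUID=561b686e-3119-4ab0-932e-20fc944fb001
DEST=gz2.domain
for ds in $(zfs list -H -o name | grep "$UUID"); do
    zfs snapshot "${ds}@tosend"
    zfs send "${ds}@tosend" | ssh "$DEST" zfs receive -v "$ds"
done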

After that, we have to stop the virtual machine.

[root@gz1 ~]# vmadm stop 561b686e-3119-4ab0-932e-20fc944fb001

Then take additional snapshots of the ZFS filesystems, as before, but with a different name. These snapshots should now be in a consistent state, since the operating system inside the virtual machine is no longer running.

[root@gz1 ~]# zfs snapshot zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last 
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend-last 
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend-last 
[root@gz1 ~]# zfs snapshot zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last


Finally, we have to send these ZFS snapshots with an incremental send. It takes only a little time.

[root@gz1 ~]# zfs send -i zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend zones/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last | ssh gz2.domain zfs receive -Fv zones/561b686e-3119-4ab0-932e-20fc944fb001
[root@gz1 ~]# zfs send -i zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0@tosend-last | ssh gz2.domain zfs receive -Fv zones/561b686e-3119-4ab0-932e-20fc944fb001-disk0
[root@gz1 ~]# zfs send -i zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1@tosend-last | ssh gz2.domain zfs receive -Fv zones/561b686e-3119-4ab0-932e-20fc944fb001-disk1

[root@gz1 ~]# zfs send -i zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend zones/cores/561b686e-3119-4ab0-932e-20fc944fb001@tosend-last | ssh gz2.domain zfs receive -v zones/cores/561b686e-3119-4ab0-932e-20fc944fb001
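
The same loop idea works for the incremental pass; again, just a sketch under the same assumptions (UUID and DEST as set above).

# incremental snapshot and send, to be run after the VM has been stopped
for ds in $(zfs list -H -o name | grep "$UUID"); do
    zfs snapshot "${ds}@tosend-last"
    zfs send -i "${ds}@tosend" "${ds}@tosend-last" | ssh "$DEST" zfs receive -Fv "$ds"
done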

A few last operations.
Grab the line related to the VM from /etc/zones/index on the source global zone and append it to the end of the same file on the destination global zone.

[root@gz1 ~]# cat /etc/zones/index | grep 561b686e-3119-4ab0-932e-20fc944fb001


[root@gz2 ~]# echo "561b686e-3119-4ab0-932e-20fc944fb001:installed:/zones/561b686e-3119-4ab0-932e-20fc944fb001:561b686e-3119-4ab0-932e-20fc944fb001" >> /etc/zones/index

Finally, copy the XML configuration file from the source global zone to the destination one.


[root@gz1 ~]# scp /etc/zones/561b686e-3119-4ab0-932e-20fc944fb001.xml gz2.domain:/etc/zones/561b686e-3119-4ab0-932e-20fc944fb001.xml

At this point, you can boot the VM on the destination global zone, check if all is working as expected, then delete the old VM from the source global zone.

Wednesday, September 10, 2014

smartos pxe: operation not permitted

I downloaded the latest SmartOS image (20140904T175324Z) suitable for PXE booting.
After unpacking the file, platform-20140904T175324Z.tgz, and following the steps described here, http://wiki.smartos.org/display/DOC/PXE+Booting+SmartOS, I stumbled on an error that prevented the server from booting. This had never happened with previous releases.

Operation not permitted (http://ipxe.org/410c613c)
Could not boot image: Operation not permitted (http://ipxe.org/410c613c)


Visiting the proposed link and looking at the Apache log file (I've configured iPXE to download the images via HTTP), I found where the problem was.

 [Wed Sep 10 xx:xx:xx 2014] [error] [client 192.168.56.123] (13)Permission denied: file permissions deny server access: /srv/tftp/images/smartos/20140904T175324Z/platform/i86pc/amd64/boot_archive


After untarring the image file, the permissions on the boot_archive file were 600 instead of the expected 644, which all previous releases had used.
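
A quick workaround, using the path from the Apache error above, is to make the file readable by the web server:

chmod 644 /srv/tftp/images/smartos/20140904T175324Z/platform/i86pc/amd64/boot_archive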

Thursday, July 31, 2014

"Bundled JRE is not binary compatible with host OS/Arch or it is corrupt. Testing bundled JRE failed."

I was installing SUN STORAGE COMMON ARRAY MANAGER SOFTWARE 6.9 for Linux on a CentOS 6 installation.
That CentOS 6 server had been installed from the SmartOS image.

Running the script

./HostSoftwareCD_6.9.0.16/RunMe.bin -c

I got
 
"Bundled JRE is not binary compatible with host OS/Arch or it is 
corrupt.  Testing bundled JRE failed."

I solved it this way (the bundled JRE appears to be a 32-bit binary, so it needs the 32-bit glibc):

yum install glibc.i686

Friday, May 16, 2014

vmadm destroy doesn't tell logadm to stop rotating logs #319

https://github.com/joyent/smartos-live/issues/319


A dry run shows which entries refer to log files that no longer exist:

logadm -n

The one-liner below parses that output and removes those stale entries with logadm -r:
logadm -n 2>&1|awk '{print $3}' |sed -e 's/://g' | xargs logadm -r

Monday, April 28, 2014

Apache Directory Studio "A fatal error has been detected by the Java Runtime Environment"

Ubuntu 14.04 64 bit
oracle-java7

Apache Directory Studio (ApacheDirectoryStudio-linux-x86_64-2.0.0.v20130628.tar.gz)

When I start the program from the command line (./ApacheDirectoryStudio), I get an error like this:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f518c1502a1, pid=18722, tid=139989739792128
#
# JRE version: Java(TM) SE Runtime Environment (7.0_55-b13) (build 1.7.0_55-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libsoup-2.4.so.1+0x6c2a1]  soup_session_feature_detach+0x11
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/ApacheDirectoryStudio-linux-x86_64-2.0.0.v20130628/hs_err_pid18722.log
[thread 139988370728704 also had an error]
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#



To solve this issue, try adding this line

org.eclipse.swt.browser.DefaultType=mozilla

at the end of this file (relative to the Apache Directory Studio installation directory):

configuration/config.ini
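
For example, assuming the installation directory shown in the crash report above:

cd /opt/ApacheDirectoryStudio-linux-x86_64-2.0.0.v20130628
echo "org.eclipse.swt.browser.DefaultType=mozilla" >> configuration/config.ini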

Tuesday, April 1, 2014

install gparted inside smartos Centos VM

Well, I'm not skilled enough to use parted from the command line inside a CentOS VM to expand the filesystem, so I use gparted.
By default the CentOS Linux image dataset from SmartOS is 10GB in size.
You can add another disk (it will be mounted on /data),
or you can grow the ZFS volume related to disk0.

So, from the global zone grow the volume related to the Linux VM:

zfs set volsize=20G zones/<UUID>-disk0

You may need to stop the VM and start it again for the new size to be visible.

Now, from inside the VM, you must install a few RPMs.
First of all, the EPEL repository.

rpm --import https://fedoraproject.org/static/0608B895.txt
rpm -ivh https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install gparted
yum install xauth
yum install dejavu-lgc-sans-fonts.noarch

Now you can connect to the VM via ssh with the -X (or -Y) option and run gparted.
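
For example (the VM address is just a placeholder):

ssh -X root@<vm-ip-address>
gparted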

Wednesday, March 19, 2014

Openindiana: "tree connect failed: syserr = No such file or directory"

I was trying to mount a CIFS (Samba) share from an OpenIndiana host:
 
-bash-4.0# mount -F smbfs //user:password@networkserver.my.domain/share /mnt/samba

I got this error

mount: //networkserver.my.domain/share: tree connect failed: syserr = No such file or directory

To solve this, I had to restart the samba client service.

svcadm restart svc:/network/smb/client:default

Wednesday, February 26, 2014

Fifo: vm reported as "provisioning", but it is false

Fifo: https://project-fifo.net

Why is a VM's status reported as "Provisioning (Running)" even though it has already been provisioned and is just running?




<killfill> that's because fifo knows if a vm is provisioning if the file 'provisioning' exists.
<killfill> Probably when creating the VM, the tools didn't wipe that file
<killfill> or maybe because you created it manually.. :P
<killfill> it's in /zones/:uuid/root/svc/ something like that.
<killfill> /zones/<uuid>/root/var/svc/provisioning


So, rm /zones/<uuid>/root/var/svc/provisioning
Et voilà.

Friday, February 21, 2014

SmartOS: add (modify) the gateway to a zone

I often forget to add the gateway to a newly created zone.

So, first get the MAC address of the NIC.

vmadm get zone-uuid | json -a nics.0.mac

52:d9:d7:ab:c7:56

Create a file, like /var/tmp/updategw.json

{
   "update_nics": [
      {
         "mac": "52:d9:d7:ab:c7:56",
         "gateway": "192.168.0.1"
      }
   ]
}


Stop the zone, then issue an update.

vmadm update zone-uuid < /var/tmp/updategw.json

Verify.

vmadm get <zone-uuid> | json -a nics.0

Start the zone.
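
For reference, stopping and starting the zone are done with vmadm as well:

vmadm stop <zone-uuid>
vmadm start <zone-uuid>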


Monday, February 3, 2014

cisco vpn client and windows 8

So, the Cisco VPN client is not compatible with Windows 8; at the very least you must change various registry keys (I haven't verified this).

A valid alternative is the Shrew Soft VPN Client.

https://www.shrew.net/download/vpn

The Standard edition is free, even for commercial use.

Map sd names to Solaris disk names

To map sd names to Solaris (SmartOS, OpenIndiana) disk names: iostat -x lists devices by their sd name, while iostat -nx lists them by their cXtYdZ name, so pasting the two outputs together gives the mapping.

paste -d= <(iostat -x | awk 'NR>2{print $1}') <(iostat -nx | awk 'NR>2{print "/dev/dsk/"$11}')

https://stackoverflow.com/questions/555427/map-sd-sdd-names-to-solaris-disk-names

watchman under SmartOS

"A file watching service.
Purpose
Watchman exists to watch files and record when they actually change. It can also trigger actions (such as rebuilding assets) when matching files change."
https://github.com/facebook/watchman

You can find pkgsrc definitions, useful to build the package under SmartOS (see "pkgsrc on SmartOS - zone creation and basic builds"), and a binary package that can be installed with pkg_add:

pkg_add watchman-2.9.1nb2.tgz

This is the link: https://github.com/alcir/watchman-pkgsrc