I’m currently setting up some new systems in a remote datacenter, and ran into intermittent write failures on an NFS server I had just set up. Surprisingly, the issue turned out to be DNS resolution of the client’s hostname, which makes perfect sense once you realize that we limit write access to specific hostnames.
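When write access is restricted by hostname, the NFS server has to resolve the client’s IP back to a name (and usually forward again) before it will grant `rw`; if that lookup fails or mismatches, writes fail even though reads work. A minimal sketch of that kind of export (hostnames and paths are hypothetical):

```
# /etc/exports — rw only for clients whose DNS resolves to these names;
# everyone else is read-only. A broken reverse lookup on a build host
# silently drops it into the read-only bucket.
/export/data  build01.example.com(rw,sync) build02.example.com(rw,sync) *(ro,sync)
```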
Perforce has introduced a rather neat utility called p4dctl that helps manage the various services (p4d, p4p, p4broker) you might want to run on a Linux machine. They even provide deb and rpm packages for it. Unfortunately, their RPM contains an init.d script that is incompatible with systemd, which is what ships with RHEL/CentOS >= 7. This may also be an issue on modern Debian-based systems, but I haven’t tested it.
When trying to upgrade our build infrastructure to use Microsoft’s new Visual Studio 2017, I got a strange request from a developer: they needed include paths from an SDK called “ScopeCppSDK”, which they claimed was installed with VS2017 within its installation folder, and was the new location for some standard libraries.
I do a lot of Perforce automation, both manually using scripts and through Jenkins plugins like the p4-plugin and perforce-plugin, the latter of which I supported for many years. All of these methods (usually) use Perforce tickets so that passwords aren’t passed around in plaintext.
I recently ran into this issue while trying to mount an NFS export from a Linux server on an AIX machine.
I understand why SELinux exists, but I still hate working with it sometimes… You’d think that after years of running into bizarre issues with nonsensical error messages I’d learn to check SELinux first, but apparently I’m a slow learner.
I recently ran into an issue (on Christmas Day, in fact) with certain binaries being unable to access files over NFS.
Super quick update: I just migrated off of WordPress and onto Jekyll hosted on GitHub Pages!
One of my users recently ran into a situation in which they couldn’t open a file for editing because Perforce thought the file was already opened on their client:
During a server move relatively recently, we decided to go about it by setting up a replica in the new site, then switching it to a master once everything had been replicated over.
This was a pain. Our backup software was unable to read certain Perforce versioned files from the P4ROOT (mostly .gz files). Obviously this is a fairly big issue, as it prevented us from creating any useful backups.
Twice this month I’ve run into the situation where one of our db.* files got corrupted, and started throwing “BTree is corrupt!” errors on various commands. The first instance was caused by our system running out of main memory, and reaping p4d processes while they were writing to the database tables. The second instance was caused by exhausted disk space.
I’m currently in the process of resizing a partition in Solaris 10. So far, the instructions that I’ve found have been quite incorrect, so I’m documenting the steps I’m taking here. In my particular case, I’m resizing a single partition on a non-root disk after increasing its size through VMware.
Ran into this little gem today… I’ve got a project that I’m currently trying to branch. It’s a very simple integration operation from one location to another, nothing crazy. Of course Perforce needs to make it as complicated as possible by throwing me this error for a couple of our files:
I recently did a migration of source code from one Perforce server to another. During the merge process, I ran into an odd warning:
perfmerge++ warning: No mapping for change 293174 in database /x/sourceserver/server/. Rejecting.
Jenkins makes adding slaves really easy. For Unix machines, the preferred method is SSH, which only requires that the master be able to contact the slave. On Windows, however, you are stuck with a JNLP-based solution, which requires that the slave be able to reach the master over the network.
Here’s a script I whipped up in order to send an email to all recent Perforce users. I needed this because my company uses a shared license, so all user accounts are shared between our Perforce servers. When a server needs to go down for maintenance, I like to email only those people who actually use it. This uses Python, but it does not require the p4python API (though I imagine it would be much simpler using it). I’m sure there are some imports that need to be cleaned up, but whatever, it’s just a quickie.
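The core of an approach like this is just parsing `p4 users` output and filtering on the last-access date. Here’s a minimal sketch of that filtering step (the line format assumed below is the usual `user <email> (Full Name) accessed YYYY/MM/DD`; the cutoff of 30 days is arbitrary, and actually sending the mail via smtplib is left out):

```python
import re
from datetime import datetime, timedelta

# Matches lines like:
#   jdoe <jdoe@example.com> (Jane Doe) accessed 2024/05/01
USER_RE = re.compile(r"^(\S+) <([^>]+)> \(([^)]*)\) accessed (\d{4}/\d{2}/\d{2})$")

def recent_user_emails(p4_users_output, days=30, now=None):
    """Return email addresses of users whose last access is within `days`."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    emails = []
    for line in p4_users_output.splitlines():
        m = USER_RE.match(line.strip())
        if not m:
            continue  # skip anything that doesn't look like a user line
        user, email, full_name, accessed = m.groups()
        if datetime.strptime(accessed, "%Y/%m/%d") >= cutoff:
            emails.append(email)
    return emails
```

In practice you’d feed it the output of `subprocess.check_output(["p4", "users"], text=True)` and hand the resulting list to your mailer.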
I often find myself in the situation where I need to delete the Perforce account of a user who has left the company. Unfortunately I’m usually unable to do so because they have left files open, resulting in this error:
I recently upgraded my work PC from Natty to Oneiric, and discovered that whenever I had a window open that wasn’t maximized, its backing process and Xorg would each use up 100% of the CPU.
Yes, I know there is a handy and rather well thought out guide here, but I thought I would post my experiences, especially given that blindly following the aforementioned tutorial resulted in my existing Ubuntu installation being wiped… First, go read the tutorial so you know what I’m talking about. Essentially you will need to make room on your hard disk for the C-STATE and C-ROOT ChromiumOS partitions, create them, and copy them from your USB stick using dd.
So I ran into this problem earlier this week. Basically we have a Solaris 10 server hosting files over NFS. The NFS server that comes with Solaris 10 supports NFSv4, but doesn’t seem to include idmapd, which is responsible for mapping user and group ids. Everything I’ve read suggests that idmapd is required on both the client and server in order for it to work correctly. Since I had no real desire to screw with the server configuration (and since other machines could mount it correctly) I kept searching.
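One detail worth knowing here: NFSv3 sends numeric UIDs/GIDs on the wire, so it doesn’t need idmapd at all. That makes forcing the client down to v3 a common workaround when the server side can’t do NFSv4 ID mapping. A hedged sketch of what that looks like on a Linux client (server name and paths are hypothetical):

```
# /etc/fstab on the Linux client — force NFSv3 so no idmapd is needed
solaris10:/export/share  /mnt/share  nfs  vers=3,hard,intr  0 0
```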
Specifically, I’m attempting today to deploy the Perforce P4Java API to the Sonatype core Maven repository. Sonatype has plenty of documentation on this process, but most of it assumes that you are building the artifacts from source, and that you are already familiar with Maven. I’m a Maven newbie and I obviously don’t have the source. Here’s a quick rundown of how I did it. My primary resource was actually this page (a shout-out to Brian Jackson for pointing me in the right direction!)
So I’ve got this ridiculous task ahead of me… I somehow need to call a Spring Bean data access object (DAO) from InstallAnywhere using Groovy scripting. The idea is that the webapp being installed has all the code needed to set up the database, but there are certain things that need to be done by the installer. Part of getting this working is dynamically loading jars from a directory, and using reflection to grab the class and method I need to call… Instead of duplicating code and having to maintain it in multiple places, I asked the dev team to provide me with a single class method that I could call externally. Needless to say, they failed to deliver, and blindly pointed me towards the Spring DAOs they were using to access the database. This requires a whole bunch of additional setup to get working, since it was written to run in the webapp context.
I ran into this absurd issue today. Apparently SUSE 11 zLinux doesn’t handle partition creation properly on DASD disks. When it gets to actually running the `fdasd` command (the zLinux version of `fdisk`, as near as I can tell), it simply hangs there forever.
Just ran into this issue while setting up a CentOS 4.7 test virtual machine for some debugging work. After some digging around on the webernets, I found this little gem.
This recently came up on the Hudson users mailing list, so I figured I’d post it here.
What follows is a chronicle of my efforts in getting SUSE SLES 11 installed in Hercules. Hopefully it will be of use to others who need to deal with this ridiculous 31-bit platform.