Broken Pipe Error
Nov 13, 2007. Fresh install of CentOS 5 / cPanel.
I get this when SSHing in on the default port 22 as root:
-bash: mail: command not found
-bash: echo: write error: Broken pipe
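This pair of messages usually means a login script (e.g. /etc/profile, /etc/bashrc or ~/.bash_profile) pipes some output to mail, which isn't present on a fresh minimal install; with no reader on the pipe, the echo fails with "Broken pipe". A quick sketch of how to confirm and fix that, assuming the culprit really is a startup script:
Code:
# Look for startup lines that pipe into mail (the usual suspects)
grep -n "mail" /etc/profile /etc/profile.d/*.sh /etc/bashrc ~/.bashrc ~/.bash_profile 2>/dev/null

# Install a mail client so the pipe has a reader again (CentOS 5)
yum install mailx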
I have made a VPS on my own dedicated server which I use to run TorrentFlux for personal use.
I am facing a few problems and don't know where to ask for help.
When I start more than about 12 transfers, I get errors in SSH (if I can log in at all), or Apache restarts, killing all the transfers.
I have 2 GB RAM and a dual-core CPU.
The error via SSH is:
sh: pipe error: Too many open files in system
I have attached an error log from Apache.
I am a noob with servers, so I have the Lxadmin control panel installed; the log is generated by it.
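That "Too many open files in system" error refers to the kernel-wide file descriptor limit (fs.file-max), not a per-user one; every active torrent holds many sockets and file handles, so a dozen busy transfers can exhaust a low default. A sketch of how to inspect and raise it, with an illustrative value rather than a recommendation:
Code:
# Allocated / free / maximum handles, system-wide
cat /proc/sys/fs/file-nr

# Current kernel-wide ceiling
sysctl fs.file-max

# Raise it for the running system (example value)
sysctl -w fs.file-max=131072

# Make it persistent across reboots
echo "fs.file-max = 131072" >> /etc/sysctl.conf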
FDC co-location packages include a $600/month package for a 1 Gbps pipe
[url]
It's shared bandwidth, so does anyone know how fast it really is?
I'm using cPanel with Exim running on my server.
I need to add a new e-mail account, say "service@domain.com". Messages sent to this address should arrive at different e-mail accounts depending on an evaluation of the sender.
So, if a message is received at "service@domain.com" from "user@mail.com", I need to check in my database whether the sender's e-mail address already exists and is assigned to one of the operators; if so, the message should be piped to that operator's e-mail address, otherwise it should go to a randomly chosen operator.
I have created a forwarder in cPanel where "service@domain.com" points to "|/home/user/pipe.pl", where pipe.pl is the script I have created to perform the database lookup.
Now I'm stuck on the way I should return the pipe_address to Exim.
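One common way around having to hand an address back to Exim is to let the pipe script do the whole job: read the message from standard input, look up the operator, and re-inject the message to that address with sendmail. Exim's pipe transport normally exposes the envelope sender in the SENDER environment variable. A rough shell sketch of that flow (the original pipe.pl is Perl, and the database, table and column names below are invented for illustration):
Code:
#!/bin/bash
# Hypothetical pipe handler: route the incoming message to an operator.
MSG=$(mktemp)
cat > "$MSG"                              # full message (headers + body) from stdin

SENDER="${SENDER:-unknown@example.com}"   # envelope sender, set by Exim's pipe transport

# Operator already assigned to this sender? (hypothetical schema)
OPERATOR=$(mysql -N -e "SELECT operator_email FROM assignments WHERE sender='$SENDER' LIMIT 1" helpdesk)

# Otherwise pick a random operator (hypothetical schema)
if [ -z "$OPERATOR" ]; then
    OPERATOR=$(mysql -N -e "SELECT email FROM operators ORDER BY RAND() LIMIT 1" helpdesk)
fi

# Hand the original message to the chosen operator and clean up
/usr/sbin/sendmail -i "$OPERATOR" < "$MSG"
rm -f "$MSG"
The script should exit 0 on success; a non-zero exit status from a pipe command is treated as a delivery failure and can bounce the message.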
I've read that Apache can pipe log files into a PHP script. I've tried it and it's not working.
I've tried two ways: one pipes to the script file (which has a line at the top telling it to load with PHP.exe); the other pipes to PHP.exe and tells it to load the script.
I put a simple bit in my script to create a new file just to check whether it worked, and it looks like the script is never being run, yet I don't get any errors from Apache.
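For what it's worth, Apache's piped-log mechanism starts the log program once when the server starts and keeps writing log lines to its standard input for as long as Apache runs (restarting the program if it dies). A handler that processes one line and exits will therefore look like it "never runs". The directive is along the lines of CustomLog "|/usr/local/bin/loghandler.sh" combined, or on Windows CustomLog "|C:/php/php.exe C:/scripts/loghandler.php" combined (paths are examples, and the whole pipe target should be quoted). A minimal sketch of the stdin loop the handler needs, shown as a shell script; a PHP handler would do the same thing reading php://stdin:
Code:
#!/bin/bash
# Minimal piped-log handler: Apache starts this once and keeps the pipe open.
# Keep reading log lines from stdin until Apache shuts down and closes the pipe.
while IFS= read -r line; do
    printf '%s\n' "$line" >> /tmp/piped-access.log
done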
I am reviewing the pricing model for web hosting.
I have come up with a basic model (page views x bandwidth used x disk space). I will get the bandwidth and page views from AWStats and the disk space from the server.
I am trying to cost the bandwidth we buy from our supplier. We have a 10 Mb pipe going into our solution costing us £14k per year. AWStats reports the bandwidth as total data transferred, e.g. 2.4 GB per day or month. I am having a problem trying to link the two.
How can I work out how much data I can get through a 10 Mb connection in a full day? I could then work out a site's bandwidth usage as a percentage of that cost. Ideally I would like to be able to say something like "with a 10 Mb pipe our solution can handle 250 GB of traffic per day".
What do people think of the costing model, and does anyone have ideas on how I can calculate the total amount of data I can shift in a day?
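As a rough worked example (assuming "10 Mb" means 10 megabits per second): 10 Mbit/s is 1.25 MB/s, and a day has 86,400 seconds, so the theoretical ceiling is about 1.25 x 86,400 ≈ 108,000 MB, i.e. roughly 105-108 GB per day in one direction at 100% saturation. No real link runs flat out around the clock, so a planning figure of perhaps half to two-thirds of that is more honest; a claim of 250 GB/day would need a pipe well over twice the size.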
Where does the pipe command get added in Plesk for WHMCS?
I'm running CentOS with Parallels Plesk and the bundled Parallels Premium Antivirus (Dr.Web). After the latest yum updates, Dr.Web continuously seems to crash and be restarted by the Parallels watchdog. By default there were no logs for Dr.Web, but when I enable logging to a file it gets spammed continuously with the following error:
Cannot create pipe for communication with scanning childs (Too many open files)
The Dr.Web process runs at 99% CPU for long periods. This completely fills the disk with logs, so I've now disabled logging again, and Dr.Web is back to continuously being restarted by the watchdog.
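Before raising any limits it can help to confirm the system really is near the kernel-wide handle ceiling, and to see which process is holding the descriptors (presumably Dr.Web, but worth checking). A quick sketch:
Code:
# Allocated / free / maximum file handles, system-wide
cat /proc/sys/fs/file-nr

# Rough count of open descriptors per command name, largest first
lsof 2>/dev/null | awk '{print $1}' | sort | uniq -c | sort -rn | head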
I have installed Plesk 11.x, Windows version. I have used it for a month, but now when I try to log in to the panel at http://mysite.com:8880 I get this error:
The system cannot find the file specified. (Error code 2) at Unable to connect to pipe .pipeP_85da9518-b79d-49a6-a154-e5055dc53d7c
I have wasted all night trying to get Subversion working with Apache 2.2.6, and in the process I have now broken yum too.
I did yum erase subversion, and now with every yum command I get:
Code:
(process:27490): GLib-CRITICAL **: file gtimer.c: line 106 (g_timer_stop): assertion `timer != NULL' failed
(process:27490): GLib-CRITICAL **: file gtimer.c: line 88 (g_timer_destroy): assertion `timer != NULL' failed
Traceback (most recent call last):
File "/usr/bin/yum", line 29, in ?
yummain.main(sys.argv[1:])
File "/usr/share/yum-cli/yummain.py", line 97, in main
result, resultmsgs = do()
File "/usr/share/yum-cli/cli.py", line 512, in doCommands
ypl = self.returnPkgLists()
File "/usr/share/yum-cli/cli.py", line 1176, in returnPkgLists
ypl = self.doPackageLists(pkgnarrow=pkgnarrow)
File "__init__.py", line 885, in doPackageLists
File "/usr/share/yum-cli/cli.py", line 75, in doRepoSetup
self.doSackSetup(thisrepo=thisrepo)
File "__init__.py", line 260, in doSackSetup
File "repos.py", line 277, in populateSack
File "/usr/lib/python2.3/site-packages/sqlitecachec.py", line 40, in getPrimary
self.repoid))
TypeError: Can not create index on requires table: near "NOT": syntax error
I saw this too:
Code:
/sbin/ldconfig: /usr/lib/mysql/libmysqlclient.so.15 is not a symbolic link
I tried uninstalling and reinstalling yum, but that did not work.
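That TypeError is raised while yum rebuilds its sqlite metadata cache, so before reinstalling yum yet again it's worth clearing the cache so the sqlite files are regenerated, and checking that the sqlite pieces yum's Python bindings rely on weren't disturbed. A sketch (package names are the usual CentOS 4 ones; verify with rpm before acting on anything):
Code:
# Throw away cached repo metadata so the sqlite files get rebuilt from scratch
yum clean all
rm -rf /var/cache/yum/*

# Check the sqlite stack yum relies on is still intact
rpm -q sqlite python-sqlite
rpm -V sqlite python-sqlite

# Retry a harmless command
yum list yum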
I have a problem with the traceroute for 72.36.229.84. Some users can reach it, but not all.
I've attached three routes below that aren't working and two that work. Look at the routes that don't work: they all stop at gblx.net (hops 9-10). The working routes don't go through gblx.net.
My question: what should I do, what can I do? Is gblx.net the problem?
Non-working route: emil@egenhost:~$ ping 72.36.229.84
PING 72.36.229.84 (72.36.229.84) 56(84) bytes of data...
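To give the upstream providers something actionable, a per-hop loss report from both a working and a non-working source is more useful than a bare ping; mtr combines traceroute and packet-loss statistics in one report (it may need installing first):
Code:
# 100-cycle report with per-hop loss; run from both a failing and a working location
mtr --report --report-cycles 100 72.36.229.84

# Plain traceroute for comparison
traceroute 72.36.229.84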
I don't know how it got broken; it was working a couple of days ago. When users click the Fantastico icon in their cPanel, a page opens that says this:
Fantastico is not installed at the default location /usr/local/cpanel/3rdparty/fantastico. Either move the Fantastico directory from it's current location to /usr/local/cpanel/3rdparty/fantastico OR enable ioncube loaders in WHM -> Tweak settings.
I enabled the ionCube loaders and that didn't fix it. Then I reinstalled Fantastico, and it's still broken.
How do I troubleshoot?
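Two quick checks follow directly from the error text itself: whether the directory really exists at the expected path, and whether the ionCube loader is actually active in the PHP build cPanel uses. The paths below are the conventional cPanel locations and worth confirming on the box:
Code:
# Is Fantastico where cPanel expects it?
ls -ld /usr/local/cpanel/3rdparty/fantastico

# Is the ionCube loader actually loaded in cPanel's own PHP?
/usr/local/cpanel/3rdparty/bin/php -v | grep -i ioncube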
I ran "yum update" on one of my servers, and it must've updated BIND, because now named doesn't start.
I basically hit all the problems in this thread:
[url]
This is CentOS4 with Plesk.
Even though I don't have that package installed, and tried every suggestion there, it still doesn't start... I mucked with the configs and moved so many files I don't know how to get back to where I started.
Quote:
Jul 24 05:08:06 www named: /etc/named.conf:67: open: /etc/rndc.key: file not found
What's my best bet for fixing this mess? I sent in an e-mail to two "server administration" companies I found in signatures here, hopefully one of them will be available today.
I changed the nameservers on critical domains to a free DNS service to get them back online, but they're acting oddly (like DB timeouts), perhaps because of the lack of a local nameserver to talk to.
But in the meantime is there anything I can do to try to fix this quick?
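The specific failure in that log line is just the missing /etc/rndc.key that named.conf includes; generating a fresh key normally clears that particular error (whether the rest of the rearranged config then loads is a separate question). A sketch for CentOS 4:
Code:
# Generate a new rndc key file (writes /etc/rndc.key)
rndc-confgen -a -c /etc/rndc.key

# Make sure named can read it
chown root:named /etc/rndc.key
chmod 640 /etc/rndc.key

# Check the config parses, then try starting named again
named-checkconf /etc/named.conf
service named start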
I'm trying to install some basic CPAN modules to use with ASSP. On my other server this all went without a hitch, but on this one (CentOS) it is a pain. First off, every time I start up cpan it asks me to do some sort of reconfiguration... every time (running as root).
Secondly, once I'm in the cpan shell, I issue this:
install Compress::Zlib
I get a whole slew of errors, and ultimately nothing is done. Here is the last portion of what is shown on the screen:
Quote:
Global symbol "$gzerrno" requires explicit package name at t/14gzopen.t line 481.
Global symbol "$gzerrno" requires explicit package name at t/14gzopen.t line 499.
Global symbol "$gzerrno" requires explicit package name at t/14gzopen.t line 502.
Bareword "Compress::Zlib::zlib_version" not allowed while "strict subs" in use at t/14gzopen.t line 38.
Bareword "ZLIB_VERSION" not allowed while "strict subs" in use at t/14gzopen.t line 38.
Bareword "Z_FINISH" not allowed while "strict subs" in use at t/14gzopen.t line 98.
Bareword "Z_STREAM_END" not allowed while "strict subs" in use at t/14gzopen.t line 110.
Bareword "Z_STREAM_END" not allowed while "strict subs" in use at t/14gzopen.t line 111.
Bareword "Z_STREAM_ERROR" not allowed while "strict subs" in use at t/14gzopen.t line 446.
Bareword "Z_STREAM_ERROR" not allowed while "strict subs" in use at t/14gzopen.t line 461.
Execution of t/14gzopen.t aborted due to compilation errors.
# Looks like you planned 217 tests but only ran 2.
# Looks like your test died just after 2.
t/14gzopen......dubious
Test returned status 255 (wstat 65280, 0xff00)
DIED. FAILED tests 1-217
Failed 217/217 tests, 0.00% okay
t/99pod.........skipped
all skipped: Test::Pod 1.00 required for testing POD
Failed Test Stat Wstat Total Fail List of Failed
-------------------------------------------------------------------------------
t/000prereq.t 4 1024 6 4 1 3-4 6
t/01version.t 255 65280 2 3 1-2
t/03zlib-v1.t 255 65280 394 785 1-394
t/05examples.t 255 65280 ?? ?? ??
t/06gzsetp.t 255 65280 ?? ?? ??
t/08encoding.t 255 65280 16 31 1-16
t/14gzopen.t 255 65280 217 432 1-217
1 test skipped.
Failed 7/8 test scripts. 633/635 subtests failed.
Files=8, Tests=635, 1 wallclock secs ( 0.72 cusr + 0.27 csys = 0.99 CPU)
Failed 7/8 test programs. 633/635 subtests failed.
make: *** [test_dynamic] Error 255
PMQS/Compress-Zlib-2.004.tar.gz
/usr/bin/make test -- NOT OK
Running make install
make test had returned bad status, won't install without force
Failed during this command:
PMQS/Compress-Raw-Zlib-2.004.tar.gz : make NO
PMQS/IO-Compress-Zlib-2.004.tar.gz : make_test NO
PMQS/Compress-Zlib-2.004.tar.gz : make_test NO
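Two separate things seem to be going on. The endless reconfiguration prompt usually means CPAN never manages to save its settings (run o conf commit once inside the shell and check that root's .cpan directory is writable). For the Compress::Zlib test failures, the pragmatic options are to force the install past its own test suite or to take the distro package instead. A sketch; the CentOS package name is the usual one, so confirm it exists in your repos:
Code:
# Inside the cpan shell:
#   o conf commit                 # save the configuration so it stops asking
#   force install Compress::Zlib  # option 1: skip the failing test suite

# Option 2, from the normal shell: use the distro package instead of CPAN
yum install perl-Compress-Zlib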
While creating a domain on Ensim I got this error:
Code:
Field Disk Quota (WARNING): Disk quota is either not enabled or not supported for virtual domain filesystems.
diskquota - Reconfigure service
(WARNING):Group quota is not enabled on the server. Not configuring quota for the site. Please fix your server's quota problem and edit the site again if you want to configure quota for the site
I'm using RHES 3 + Ensim Pro 4.1.0-8. I know one option might be to upgrade to Ensim Pro X, but right now I cannot do that; I need alternative solutions.
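Ensim's complaint is really about the filesystem rather than the panel: the partition holding the virtual domain filesystems has to be mounted with user and group quotas enabled before the site can be configured. A sketch of enabling quotas on an ext3 partition; the mount point is an example (Ensim normally keeps sites under /home/virtual):
Code:
# /etc/fstab: add usrquota,grpquota to the options of the relevant partition, e.g.
#   LABEL=/home  /home  ext3  defaults,usrquota,grpquota  1 2

# Remount, build the quota files, and switch quotas on
mount -o remount /home
quotacheck -cugm /home
quotaon -v /home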
I ran into trouble with a server upgrade on GoDaddy.
Once the server was upgraded from Apache 2.2 to Apache 2.4 all
Code:
# cat /root/.autoinstaller/microupdates.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<patches>
<product id="plesk" version="11.0.9" installed-at="20121209T212320">
<patch version="62" timestamp="" installed-at="20140723T035123" />
</product>
</patches>
Minutes after this microupdate auto-installed, I became unable to log in to POP and IMAP on this server. None of the accounts can log in. The logs show hundreds of login failures, accompanied by a new record I've never seen in these logs before:
Code:
pop3d-ssl: Unexpected SSL connection shutdown.
Unfortunately, I can't find anything about what MU#62 actually included. The changelog only goes up to MU#61, so I'm very concerned: URL....
I've googled the specific error I'm seeing above, but nothing appears to be even remotely related to this problem. Is it possible to roll back the MU to determine whether that is in fact the problem?
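Before trying to roll the microupdate back, it may be worth looking at what the SSL side of the POP3/IMAP services is actually presenting; every account failing at once with "Unexpected SSL connection shutdown" smells more like a certificate or SSL-configuration problem in Courier than bad passwords. A quick check from any machine with OpenSSL (the hostname is a placeholder):
Code:
# Watch the handshake and certificate chain on the POP3S port
openssl s_client -connect mail.example.com:995

# Same for IMAPS
openssl s_client -connect mail.example.com:993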
We had someone take a look at our server for security issues, and they believed that one way of keeping "hackers at bay" would be to write an sh wrapper that only allowed root access. Well, this immediately broke the system.
I have spent about six hours now attempting to fix this. We're using Red Hat Enterprise 4, and the recovery disc doesn't have the SCSI drivers we need to even mount the hard drive, so that's out. I can't skip initrd because it apparently contains the drivers for the hard drive, but it's also not allowing the system to boot up. I have tried setting kernel emergency in grub, but it still loads initrd and breaks. Without initrd, I get a kernel panic while mounting the hard drive.

Annoyingly enough, doing cat initrd-version.img at the grub prompt shows the file just fine, so why is initrd required for mounting the drive if it's already accessible? Also, it doesn't look like there's any way to edit initrd from grub, which is a real pain in the head, because I could always comment out the offending lines. And of course, even though I can see the initrd file, I can't do anything else with the file system.

I have requested of the data center to provide a boot disc that can mount the hard drive, and if that can be done then I can go in, make the changes immediately, and hope that it works; but without that, it seems I'm stuck. Any suggestions?
When I say "broken", what happens is I get "Out of memory: Killed process x (procname)" which then repeats itself infinitely, killing any process which would load.
The data center says that they HAVE no boot discs capable of mounting the hard drive. This is utterly ridiculous, because the only option from here is to order the expensive OS reload. I'll never use Red Hat again.
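If a rescue environment with working drivers can eventually be found (or the drive is attached to another machine), the actual repair is small, assuming the wrapper was dropped in as /bin/sh: mount the root filesystem and point /bin/sh back at the real shell. A sketch; the device name is an assumption:
Code:
# From a rescue environment; adjust the device to match the real root partition
mkdir -p /mnt/sysroot
mount /dev/sda2 /mnt/sysroot

# Confirm /bin/sh is the wrapper, then put the original shell back
ls -l /mnt/sysroot/bin/sh
mv /mnt/sysroot/bin/sh /mnt/sysroot/bin/sh.wrapper
ln -s bash /mnt/sysroot/bin/sh

umount /mnt/sysroot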
My server's hard disk (Fedora Core 5 and Plesk 8) broke two days ago, and my backup tar.gz is too old.
The datacenter (fdcservers.net) tried to attach the old hard disk as a slave, but the server is not recognising the old drive. The datacenter says they cannot do anything more.
My question:
Is there any software or company that can recover my hard disk data?
My last six months of work is gone now.
Following the update from Plesk 11 to 12 (and installing the latest Ubuntu fixes), Plesk panel access is broken. The welcome page says: "Can not load key: key is empty." And of course there is no way to log in to the management interface. All services appear to be up, so this only affects the panel web service.
Fishing through panel.log, the problem appears to be at a lower level than I initially thought (see below). Is anyone else experiencing issues after updating from 11 to 12 on Ubuntu (12)?
[18-May-2015 19:46:34 Europe/Berlin] PHP Warning: file_get_contents(/opt/psa/var/sso.sp.pem): failed to open stream: No such file or directory; File: /opt/psa/admin/externals/xmlseclibs.php, Line: 285
[18-May-2015 19:46:34 Europe/Berlin] Exception: PHP Warning: file_get_contents(/opt/psa/var/sso.sp.pem): failed to open stream: No such file or directory; File: /opt/psa/admin/externals/xmlseclibs.php, Line: 285
file: /opt/psa/admin/plib/Smb/Exception/Syntax.php
line: 56
[code]..
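The missing /opt/psa/var/sso.sp.pem is something the upgrade should have (re)generated, which is the sort of damage Plesk 12's own repair tooling is meant to address; running it is a reasonable first step before digging deeper, though it isn't guaranteed to recreate the SSO key specifically:
Code:
# Plesk 12 ships a repair utility; start with the installation components
plesk repair installation

# If the panel still won't load, repairing the web server configuration is the usual follow-up
plesk repair web -y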
httpd fails to start after bootstrap. Log file below.
Warning: web server configuration is broken. We will try to repair it. This operation can take a lot of time, please do not interrupt the process.
Unable to rebuild web server configuration, possible there are broken domains
Trying to reconfigure web-server configurations skipping broken domains... Execution failed.
Command: httpdmng
Arguments: Array
(
[0] => --reconfigure-server
[1] => -no-restart
[2] => -service-node
[3] => local
)
[Code] ....
System:
Parallels Plesk v12.0.18_build1200140610.21 os_Ubuntu 14.04
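When httpdmng gives up like this, the usual way forward is to find which domain's generated configuration is the broken one. The repair utility reports offending domains, regenerating a single suspect domain isolates the culprit, and Apache's own syntax check often names the bad vhost file directly. A sketch for Plesk 12 on Ubuntu (the domain name is a placeholder):
Code:
# Let Plesk check and rebuild the web server configuration, reporting broken domains
plesk repair web -y

# Regenerate the config for one suspect domain at a time
/opt/psa/admin/sbin/httpdmng --reconfigure-domain example.com

# Apache's syntax check usually points at the offending include
apache2ctl configtest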
Symptoms:
Whenever I click "File Sharing" in Plesk, I get the following error and no content is visible:
Internal error: Error in cURL request: IPv6 numerical address used in URL without brackets
Updates ran over the weekend and I can no longer log in to the Plesk panel. I'm getting the following PHP error: Fatal error: Call to undefined function get_gpc() in D:Plesadminhtdocsindex.php on line 3
Unfortunately, encrypted FTP transfer has been broken since Plesk was upgraded to version 12. There are two cases:
Passive FTP without encryption on port 21: everything works fine.
Passive FTP with explicit TLS encryption on port 21: the control connection can be established using TLS on port 21, but the data transfer can't be established in passive mode (ETIMEDOUT - connection attempt timed out).
I thought it could be a firewall issue, but I'm currently unable to disable the firewall for testing, since the firewall configuration has been unreachable since upgrading to Plesk 12! See second thread: URL....
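The pattern of a TLS control connection that works while passive data connections time out is almost always the passive port range: with plain FTP the firewall's FTP helper can read the PASV reply and open the data port on the fly, but once the control channel is encrypted it can't, so the server needs a fixed passive range and that range has to be opened explicitly. A sketch for ProFTPD (which Plesk uses on Linux) plus iptables; the port range is an example:
Code:
# /etc/proftpd.conf (or an include under /etc/proftpd.d/): advertise a fixed passive range
#   PassivePorts 49152 50000

# Open the same range in the firewall
iptables -A INPUT -p tcp --dport 49152:50000 -j ACCEPT

# Plesk runs proftpd from xinetd on most Linux setups, so reload xinetd afterwards
service xinetd reload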
I've had a dedicated server at ThePlanet / ServerMatrix for the past few years, and for the most part the service has been okay. Uptime has been good, and support used to be fairly swift.
Early Wednesday morning the primary hard drive in my server started dying. Throughout the day various services kept going up and down, and overall the entire server was very unstable. I didn't get much movement from ThePlanet's support team - they would reboot the server, SSH and other services would come back online, and so they would close the ticket.
Thirty minutes after the reboot the HD would switch to read-only and stuff would start dying. So they finally recommended that I replace the HD and do an OS reload. I said fine as I had a backup of all of the accounts on a 2nd hard drive.
Well it took until 6am this morning for the OS reload to finally be completed, but when it was done apache was *completely* screwed up. WHM was up and running but if you went to the server IP address in the browser you got an error.
It turns out that something really badly went wrong with the OS reload but it took them hours before they even admitted that there was something wrong that needed more action. It's now 10pm and while email and other services are up, apache is still nonexistent.
When I try to run easyapache it barely starts before erroring out with a bunch of missing dependencies. I cannot install GD and a number of other items, and I keep getting error messages that SSL isn't installed either.
Please visit [url] for help with this error.!
No original working apache backup to restore!
Executing '/scripts/initfpsuexec'!
Executing '/scripts/initsslhttpd'!
Compiling report...
Sending report (6304 bytes)...
If you want to create a support ticket with cPanel regarding this please reference 'BuildAP Report Id': '741873'!
Report processed.
Verbose logfile is at '/usr/local/cpanel/logs/easy/apache/build.1212079281'
----
It seems the yum repo being used has bad files:
Error: Missing Dependency: zlib = 1.2.3-0 is needed by package zlib-devel
Error: Missing Dependency: libjpeg = 6b-0 is needed by package libjpeg-devel
--- "-0" isn't a normal package ID.
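Those "= 1.2.3-0" requirements look like metadata from a stale or mismatched repository. Before fighting easyapache any further, it may be worth clearing yum's metadata and checking which repository is actually offering those packages, and whether /etc/yum.conf carries stale exclude lines. A sketch:
Code:
# Start from clean metadata
yum clean all

# Show every available version of the packages in question and the repo providing them
yum --showduplicates list zlib zlib-devel libjpeg libjpeg-devel

# Repo definitions and excludes worth eyeballing
ls /etc/yum.repos.d/
grep -n exclude /etc/yum.conf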
I can't even transfer my accounts off the server, as that's also broken - I was going to move all of the accounts off to my KnownHost VPS, but I keep getting an authentication error ("sshcmdpermissiondeny") even though I'm definitely entering the correct root password.
Yesterday I bought an SSL certificate from Comodo (PositiveSSL).
I have three files to install: the certificate itself, the root certificate, and the intermediate certificate.
Where can I add the intermediate certificate file? Currently, an SSL report shows that the chain is broken (which is actually correct).
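Where it goes depends on what is serving the site: hosting control panels usually have a separate "CA bundle" or "intermediate certificate" field when the certificate is installed, and for a plain Apache 2.2 vhost the intermediate is referenced alongside the certificate and key with SSLCertificateChainFile (SSLCACertificateFile is also accepted). A sketch of the vhost lines; the paths are placeholders:
Code:
SSLEngine on
SSLCertificateFile      /etc/ssl/certs/domain.com.crt
SSLCertificateKeyFile   /etc/ssl/private/domain.com.key
SSLCertificateChainFile /etc/ssl/certs/comodo-intermediate-bundle.crt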
My current configuration is: Ubuntu Server 10.04, Plesk 11.5 and Roundcube 0.9.5 (installed via Plesk as the default webmail application) on Apache.
I played around with an SSL checker (https://www.ssllabs.com/ssltest/) to test my certificates and found that Roundcube delivered a broken certificate chain: it didn't deliver the intermediate certificate correctly. I searched through the Roundcube configuration file (/etc/apache2/plesk.conf.d/roundcube.conf) and discovered that there was only an entry for SSLCertificateFile.
To fix this I added the intermediate certificate via SSLCACertificateFile to the configuration file:
Code:
SSLCertificateFile "/opt/psa/var/certificates/cert-1sCtWB"
SSLCACertificateFile "/opt/psa/var/certificates/cert-FGLFqQ"
The only problem is that this configuration file is generated automatically:
Code:
#ATTENTION!
#
#DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
#SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.
All my sites using SSL are now broken and the panel says there are no certs... I hope they are still there somewhere I can find them.
I have Plesk 11.5.30 and no new FTP accounts work. All I get when trying to connect is "530 User cannot log in, home directory inaccessible." I have run the command:
"%plesk_cli%
epair.exe" --reconfigure-ftp-site -webspace-name site.co.uk
I have removed and added the user again, repairing in between. I have manually tried creating the user folder under default/6/localuser/username and setting permissions. I have checked the local Windows account, and the home directory is fine.
All old accounts created previously work fine. I could at least connect to the master FTP site with the local user folder, but then it was read-only.