I didn't want to say anything bad about this host, because the staff is very friendly, but people need to be warned in case they didn't get what they ordered either.
I ordered a server with hot-swappable RAID and got the welcome letter a few days later. The welcome letter didn't have my root password or my IP address range, so I emailed asking for those. Then I started wondering if they had missed anything else, because neither the welcome letter nor their client area lists the server specs. So I asked if they had given me the correct hardware.
They said no, and offered a monthly credit and the correct hardware for the inconvenience. It was a new server, so it wasn't a big deal to me.
But then I wondered if they had done the same with the server I ordered about 40 days ago. So I asked, and they said they had messed up my hardware on that order too. That server is already up and running with sites on it.
The host is AxisHost. I know many people here recommend them, and that's why I chose them, but this seems like it may be happening a lot with them.
Checking rkhunter version...
  This version  : 1.3.2
  Latest version: 1.3.2
[ Rootkit Hunter version 1.3.2 ]

Checking rkhunter data files...
  Checking file mirrors.dat          [ No update ]
  Checking file programs_bad.dat     [ No update ]
  Checking file backdoorports.dat    [ No update ]
  Checking file suspscan.dat         [ No update ]
  Checking file i18n/cn              [ No update ]
  Checking file i18n/en              [ No update ]
  Checking file i18n/zh              [ No update ]
  Checking file i18n/zh.utf8         [ No update ]

Warning: Checking for preload file [ Warning ]
Warning: Found library preload file: /etc/ld.so.preload
Warning: The file properties have changed:
  File: /bin/ps
  Current hash: 36f3d8a9fcaebf5838e5e55ebdcac7e355477343
  Stored hash : 8f1acf237e562043f8353f4ec5d0c3490c0d0cb3
  Current inode: 1228803   Stored inode: 1228857
  Current size: 61364      Stored size: 67088
  Current file modification time: 1214487892
  Stored file modification time : 1195262225
Warning: The command '/usr/bin/GET' has been replaced by a script: /usr/bin/GET: perl script text executable
Warning: The command '/usr/bin/groups' has been replaced by a script: /usr/bin/groups: Bourne shell script text executable
Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne shell script text executable
Warning: The file properties have changed:
  File: /usr/bin/top
  Current hash: 15f1f743d73d9546a05a15644816139de7708327
  Stored hash : 5e78fb7f0a02643a91964081ca03316dbaf01bdd
  Current inode: 246165    Stored inode: 245920
  Current size: 48536      Stored size: 48504
  Current file modification time: 1214487892
  Stored file modification time : 1195262225
Warning: The file properties have changed:
  File: /usr/bin/vmstat
  Current hash: 898351bc3be226caf6915715b23a1c7cc5d35fdd
  Stored hash : edaa64f3921a0a2d873c14a5eb641ba883f4dcff
  Current inode: 246561    Stored inode: 246020
  Current size: 17872      Stored size: 20444
  Current file modification time: 1214487892
  Stored file modification time : 1195262225
Warning: The file properties have changed:
  File: /usr/bin/w
  Current hash: 480c2c2e4f1048e19fc075f4daebe79fa84e08d1
  Stored hash : 87f39eeb583bc7f6622e95fd0266f093ed8b362b
  Current inode: 246020    Stored inode: 246167
  Current size: 9720       Stored size: 11720
  Current file modification time: 1214487892
  Stored file modification time : 1195262225
Warning: The file properties have changed:
  File: /usr/bin/watch
  Current inode: 246167    Stored inode: 245924
  Current file modification time: 1214487892
  Stored file modification time : 1195262225
Warning: The command '/usr/bin/whatis' has been replaced by a script: /usr/bin/whatis: Bourne shell script text executable
Warning: The command '/sbin/ifdown' has been replaced by a script: /sbin/ifdown: Bourne-Again shell script text executable
Warning: The command '/sbin/ifup' has been replaced by a script: /sbin/ifup: Bourne-Again shell script text executable
Warning: The file properties have changed:
  File: /sbin/sysctl
  Current hash: b560099caf18d28bcc0249efaec75dcddb87b219
  Stored hash : fa13202ac5897d9f7198e8afbbe7d0c835b07639
  Current inode: 589893    Stored inode: 589875
  Current size: 9144       Stored size: 11048
  Current file modification time: 1214487892
  Stored file modification time : 1195262225
I know some of these warnings, like the ones for /usr/bin/GET, groups, ldd, whatis, ifdown and ifup, are normal false positives.
But the other warnings are new. I think the files changed after upgrading cPanel to 11.23; I'm running cPanel on CentOS 4.6.
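If the changed binaries really did come from the cPanel/CentOS update (worth verifying first), you can tell rkhunter to re-learn the file properties so the warnings go away. A minimal sketch, assuming a standard rkhunter 1.3.x install:

rpm -V procps          # verify ps/top/vmstat/w/watch/sysctl against the RPM database first
rkhunter --propupd     # rebuild rkhunter's stored hashes, sizes and inodes
rkhunter --check       # re-run the scan; the property-change warnings should clear

Only run --propupd once you're satisfied the files were replaced by a legitimate update and not by an intruder.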
After a few days of not getting even one spam message out of an average of 70 a day, the company manager got angry! He thought the mail system was down.
Our provider, a VDS subscriber who rents us a shared plan for a website, said they changed nothing. I asked them directly: did you install any new anti-spam software on the site? They said no.
In a later conversation with the company manager, they told him they had installed new anti-spam software on the whole server and could not turn it off for our site specifically.
The problem is, they already had anti-spam software of a sort that only marks spam-like messages with "**SPAM**" in the subject line and leaves everything else alone. I am afraid some customers' emails are being flagged as false positives and deleted by this new software they claim to have implemented.
Many times my emails to Yahoo and Hotmail end up in the Spam/Junk folder for reasons that aren't widely known, like sending to myself while putting recipients in CC, or just using CC in an auto-reply. These are things any non-technical person might do.
I am thinking of switching to a new host where we only host mail (via the MX record), so that no email gets deleted without our review.
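For what it's worth, you can confirm where a domain's mail is routed before and after such a move with a quick lookup (example.com below is just a placeholder for the domain in question):

dig example.com MX +short     # lists the mail exchangers and their priorities

Pointing the MX records at a separate mail-only host moves only the mail flow; the website itself can stay where it is.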
I was surprised to see that they had 100% uptime in May according to their logs. I am used to seeing 99.98%, 98.97%, etc. with other hosts, but even 100% is quite possible.
On June 7th the uptime suddenly dropped below 96%, with an average of 7 outages. I was really disappointed, as I was planning to sign up. But after a day or so it rose back to 100% with 0 outages on all of their servers, which made it pretty clear how they got 100% uptime in May.
According to them, they had an attack because of which the IPs of the nodes had to be changed, and subsequently they also changed them on hyperspin.com (their server monitoring service). I immediately signed up with HyperSpin to verify this claim. What I clearly observed is that changing the IP or hostname of a monitored service on HyperSpin does not reset its log. It's quite obvious the logs were reset intentionally to hide the actual server uptime and make it always show 100%. When I went back to them on this issue, they preferred to close the ticket. I just want to know from other hosts: is this practice common, or is primaryvps.com an exception? As mentioned on their site, the uptime log is located at:
hyperspin.com/publicreport/30037/20077
But don't expect too much. It has only two figures: 0 for outages and 100% for uptime.
I am getting several "iptables: Invalid arguments" messages. I traced this to these iptables calls from within /etc/apf/firewall. Each of these iptables calls gives "iptables: Invalid arguments":
/sbin/iptables -A INPUT -i venet0 -p tcp --tcp-flags ALL NONE -j IN_SANITY
/sbin/iptables -A INPUT -i venet0 -p tcp --tcp-flags ALL FIN,URG,PSH -j IN_SANITY
/sbin/iptables -A INPUT -i venet0 -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j IN_SANITY
/sbin/iptables -A INPUT -i venet0 -p tcp --tcp-flags ALL ALL -j IN_SANITY
/sbin/iptables -A INPUT -i venet0 -p tcp --tcp-flags ALL FIN -j IN_SANITY
Any thoughts? According to my ISP, I have these iptables modules: iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc
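A few things that might be worth checking before blaming APF, assuming a reasonably recent 2.6 kernel inside the VPS (the proc file below only exists on newer kernels):

cat /proc/net/ip_tables_matches   # lists the match extensions the kernel will actually accept
lsmod | grep ip                   # which netfilter modules are loaded
/sbin/iptables -A INPUT -i venet0 -p tcp --tcp-flags ALL NONE -j DROP   # try one rule by hand with a built-in target (delete it again afterwards)
dmesg | tail                      # sometimes carries a more specific error

On OpenVZ/Virtuozzo VPSes, "Invalid arguments" frequently means the host node hasn't enabled a module or match that the rule needs, so the ISP may have to allow it on their side.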
I am looking for web hosts providing free dedicated (not shared) SSL certificates along with a dedicated IP.
So far I have found only dotable.com and other companies in the UK2 group providing such a package. Is there any other good web host providing free SSL certificates?
My budget is a maximum of $10 per month.
Only cPanel hosts are preferred, and access to WHM would be a definite plus, though not mandatory.
I have a chrooted FTP user that I use on my server. I would like to run a cron job as this user that backs up my MySQL databases. When I execute the job, it complains that date and mysqldump don't exist. I was able to fix the date problem simply by copying it from the real /bin to the chrooted /bin. However, I can't simply copy mysqldump because it depends on several libraries. Does anybody know how I can give this chrooted user access to commands that aren't in his chroot?
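One common approach is to copy the binary into the chroot along with every shared library it links against; ldd tells you which ones those are. A rough sketch, run as root, assuming the chroot lives at /home/chrootuser (substitute your own path):

CHROOT=/home/chrootuser
mkdir -p $CHROOT/usr/bin
cp /usr/bin/mysqldump $CHROOT/usr/bin/
# ldd prints the libraries mysqldump needs; copy each one into the same
# relative path inside the chroot.
for lib in $(ldd /usr/bin/mysqldump | grep -o '/[^ )]*'); do
    mkdir -p $CHROOT$(dirname $lib)
    cp $lib $CHROOT$(dirname $lib)/
done

The other option is to sidestep the problem: run the mysqldump cron job as a normal (non-chrooted) system user and just drop the resulting dump file into the chrooted user's directory.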
I don't really know much about how mod_security rules work; I just clicked on the default configuration in WHM.
Anyway, one user on our vBulletin board has PMed me saying he can't access the board. He gave me his fixed IP, and I noticed it is in the CSF denied IP list as:
lfd: 5 (mod_security) login failures from xx.xx.... I've checked the mod_security log and it has about twenty entries for this IP saying: ....
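If it turns out to be a false positive, here is a sketch of how you might clear and whitelist him on a standard CSF/LFD install (xx.xx.xx.xx stands for his IP):

csf -g xx.xx.xx.xx    # show where the IP currently appears in the firewall config
csf -dr xx.xx.xx.xx   # remove it from the deny list
csf -a xx.xx.xx.xx    # optionally whitelist it so lfd stops re-blocking him
csf -r                # reload the rules

Longer term, you'd want to find which mod_security rule IDs are firing on his normal vBulletin posts and disable or tune just those rules.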
I've had this virtual server with GoDaddy forever. I have about 52 websites on it. I was adding another, walked through the registration process, and forgot to click the save button. I then proceeded to the domain area of GoDaddy to update the IP address. Now I can't add domains, delete DNS settings, or update DNS settings without getting these kinds of messages:
Your URL: /dns/sync.do
Error details:
CommandFailedException: Unable to parse DNS configuration file
  at c.g.t.f.systems.dns.LinuxDnsSubsystem.updateConfigFile:1165
  at c.g.t.f.systems.dns.LinuxDnsSubsystem.synchronizeDomains:958
  at c.g.t.w.actions.dns.ActionDnsSync.process:39
  at c.g.t.w.actions.AbstractSpringAction.execute:118
  ...
  at c.g.t.w.filters.AuthorizedResourceFilter.doFilter:38
  ...
  at c.g.t.w.filters.RequestPopulationFilter.doFilter:117
  ...
Cause: SAXParseException: Premature end of file.
  ...
  at c.g.t.f.systems.dns.LinuxDnsSubsystem.updateConfigFile:982
  at c.g.t.f.systems.dns.LinuxDnsSubsystem.synchronizeDomains:958
  at c.g.t.w.actions.dns.ActionDnsSync.process:39
  at c.g.t.w.actions.AbstractSpringAction.execute:118
  ...
  at c.g.t.w.filters.AuthorizedResourceFilter.doFilter:38
  ...
  at c.g.t.w.filters.RequestPopulationFilter.doFilter:117
  ...
So! GoDaddy has now had me upgrade my disk space by another 10 GB, and I've tunneled in with SSH and run memhog to increase memory because they are suggesting that I'm too low on RAM and want me to purchase another server; I can't just upgrade. I've updated all the packages in my Simple Control Panel. I finally got an email from them after begging for help, because $75 an hour is just too much for me!
This is what I got:
Dear Sir/Madam,
Thank you for contacting Server Support.
If this issue started after the modifications to the DNS of a domain, then it is likely that the DNS file or configuration has become corrupted. You can attempt to manually update or recreate the DNS file via SSH. Unfortunately, we are unable to provide assistance with the configuration of the server or modification of files. While you do have a lot of domain names added to the server, a backup of the content and reprovision of the server will reset the server to the default settings. However, this will remove all content and require that you re-add all domains and content once again.
I have backed up all my sites and databases, but to start all over again is a horrific thought. I can't imagine the nightmare.
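One thing that might be worth checking over SSH before a full reprovision: a SAXParseException complaining about a "premature end of file" usually means the parser was handed an empty (zero-byte) XML file. The paths below are only guesses, since I don't know where GoDaddy's panel keeps its DNS configuration; adjust them to wherever the config actually lives:

find /etc /var/named -maxdepth 3 -type f -name '*.xml' -size 0 2>/dev/null

If an empty config file turns up, restoring it from a backup or recreating it by hand may be far less painful than rebuilding 52 sites.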
I tried a little searching, both on Google and on here, but this is probably going to be a private or member-based thing anyway.
I've gotten a couple of comments that some of my outbound personal mail is ending up in spam folders. I think it's almost entirely limited to recipients using Outlook (or Outlook Express?) as a client, which I assume doesn't even do network-based lookups. Nonetheless, I don't seem to be on any blacklists, and running the mail through my own SpamAssassin filter comes up with basically a zero score. But the fact that more than one person has had the problem concerns me greatly. Also, I haven't seen any significant reason why the content itself would trigger anything.
I realize a public spam-test service would basically be a testing ground for spammers trying to evade detection, but there are obviously legitimate uses as well. Is there such a tool somewhere? Thanks for any advice. Public information sharing is key to a forum, but PMs are welcome in this case.
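Not a full content tester, but one self-check you can script yourself is querying the common DNS blacklists directly: reverse the octets of your sending IP and look it up in the blacklist zone. With 203.0.113.45 as a stand-in for your mail server's IP:

dig +short 45.113.0.203.zen.spamhaus.org   # an answer of 127.0.0.x means listed; no answer means clean
dig +short 45.113.0.203.bl.spamcop.net

Beyond that, making sure your reverse DNS and SPF record match the sending hostname rules out the usual server-side causes.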
I have been using the free version of siteuptime.com, but this month alone I have had three false alerts so far. I am always logged in via PuTTY and the session never gets interrupted, and I am sure I am on the site at the times SiteUptime supposedly reports it down (I am guessing my firewall blocks their server IP for whatever reason). So I am wondering: is there a really good public monitoring service out there that produces more accurate public reports and doesn't flake out on me?
What's your experience with SiteUptime for monitoring, if any? Sometimes it works OK, other times it's just off the charts.
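Before switching services, it might be worth confirming the firewall theory. If you run CSF/LFD (adjust paths for whatever firewall you actually use), grep its logs for the monitoring service's source address (203.0.113.10 below is just a placeholder for one of their monitor IPs):

grep 203.0.113.10 /var/log/lfd.log /etc/csf/csf.deny 2>/dev/null

If the monitor's IPs show up there, whitelisting them should stop the false alerts regardless of which monitoring service you use.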
Every time I log in to Plesk 11.09 I get an email from admin saying that, due to the maximum number of failed login attempts for admin, the account was blocked for 30 minutes.
First, I do not have failed login attempts; I log in successfully every time.
Second, the account is not blocked. I can log in, out, and back in as many times as I want without a problem, except that I get this email every time.
1) I use DNSMadeEasy for a couple of my important domains so I can utilize their failover service.
2) I use my own nameservers for everyone else.
At my registrar (GoDaddy) I've added host entries to my domain (let's call it host.com) for ALL of my nameservers: DNSMadeEasy's and mine. For example, here are my host entries:
At the registrar I've then configured host.com itself to use the first five nameservers, the DNSMadeEasy ones.
For less critical sites that I host I simply point them to ns1.host.com and ns2.host.com, my nameservers.
Now, here's the twist. If I use dig to look up www.host.com I get:
[root@lax1 ~]# dig +trace www.host.com
; <<>> DiG 9.3.3rc2 <<>> +trace www.host.com
;; global options:  printcmd
.                       220048  IN  NS  D.ROOT-SERVERS.NET.
...........................................
.                       220048  IN  NS  K.ROOT-SERVERS.NET.
;; Received 228 bytes from 66.63.160.2#53(66.63.160.2) in 1 ms

net.                    172800  IN  NS  J.GTLD-SERVERS.net.
...........................................
net.                    172800  IN  NS  G.GTLD-SERVERS.net.
;; Received 497 bytes from 128.8.10.90#53(D.ROOT-SERVERS.NET) in 74 ms

host.com.               172800  IN  NS  nsdme0.host.com.
host.com.               172800  IN  NS  nsdme1.host.com.
host.com.               172800  IN  NS  nsdme2.host.com.
host.com.               172800  IN  NS  nsdme3.host.com.
host.com.               172800  IN  NS  nsdme4.host.com.
;; Received 225 bytes from 192.48.79.30#53(J.GTLD-SERVERS.net) in 125 ms

www.host.com.           1800    IN  CNAME  host.com.
host.com.               75      IN  A      60.55.55.55
host.com.               86400   IN  NS     nsdme2.host.com.
host.com.               86400   IN  NS     nsdme1.host.com.
host.com.               86400   IN  NS     nsdme5.host.com.
host.com.               86400   IN  NS     nsdme0.host.com.
host.com.               86400   IN  NS     nsdme4.host.com.
host.com.               86400   IN  NS     nsdme3.host.com.
;; Received 276 bytes from 123.123.123.123#53(nsdme0.host.com) in 68 ms

BUT, if I look up the nameserver (ns1.host.com) I get:
[root@lax1 ~]# dig +trace ns1.host.com
; <<>> DiG 9.3.3rc2 <<>> +trace ns1.host.com
;; global options:  printcmd
.                       218964  IN  NS  M.ROOT-SERVERS.NET.
...........................................
.                       218964  IN  NS  K.ROOT-SERVERS.NET.
;; Received 228 bytes from 66.63.160.2#53(66.63.160.2) in 1 ms

net.                    172800  IN  NS  H.GTLD-SERVERS.net.
...........................................
net.                    172800  IN  NS  G.GTLD-SERVERS.net.
;; Received 497 bytes from 202.12.27.33#53(M.ROOT-SERVERS.NET) in 115 ms

ns1.host.com.           172800  IN  A   60.55.55.55
host.com.               172800  IN  NS  nsdme0.host.com.
host.com.               172800  IN  NS  nsdme1.host.com.
host.com.               172800  IN  NS  nsdme2.host.com.
host.com.               172800  IN  NS  nsdme3.host.com.
host.com.               172800  IN  NS  nsdme4.host.com.
;; Received 241 bytes from 192.54.112.30#53(H.GTLD-SERVERS.net) in 151 ms
What I've realized is that the actual IP addresses for nameserver host entries come from a higher-level server than my own, in this case H.GTLD-SERVERS.net; these are the glue records registered through the registrar. I guess this makes sense, but I just hadn't realized it before. It looks like I don't even need to have A records for the host nameservers in my own DNS zone.
Now for the question. Can I:
1) Remove my custom host nameserver entries from my registrar.
2) Add entries in my DNSMadeEasy records to specify the location of ns1.host.com and ns2.host.com.
3) Use the failover provided by DNSMadeEasy to also fail over the DNS entries for my nameservers?
I know this would require one more hop if it works, but it would allow me to provide failover for fifty domains without having to purchase the extra domains at DNSMadeEasy.
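One way to sanity-check the glue situation yourself is to compare what the .com servers hand out with what the zone itself says, since the glue at the parent is where resolvers start. A sketch using the names from the traces above:

dig +norecurse @a.gtld-servers.net ns1.host.com A   # the glue record held at the registry
dig @nsdme0.host.com ns1.host.com A                 # the answer from the host.com zone itself

Whether you can drop the registered host (glue) entries entirely depends on the registry's rules, but since the host.com delegation itself points at the nsdme* names, resolvers should generally fetch ns1.host.com's address from the host.com zone at DNSMadeEasy, which is exactly the failover behaviour you're after. I'd still test the change with one unimportant domain first.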
Is there a way to page results from ls through FTP, similar to the way you can in your shell by using ls | less (or ls -l | more)? When I try ls | less through FTP, even on a Linux server, it wants to output to a local file.
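If you're using the traditional command-line ftp client, I believe the trick is that the optional second argument to ls/dir is a local file, and a name starting with "|" gets piped to a local command instead of written to disk:

ftp> dir . "|less"
ftp> ls . "|more"

Graphical or web-based FTP clients won't support this, but the stock Linux/BSD ftp client should.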
Requests with error response codes
   400 Bad Request
      /vb/Juice/images/editor/bold.gif: 1 Time(s)
      /w00tw00t.at.ISC.SANS.DFind: 1 Time(s)
   404 Not Found
      /admin/phpmyadmin/main.php: 1 Time(s)
      [url]
---------------------- httpd End -------------------------
--------------------- Kernel Begin ------------------------
2 Time(s): PrefPort:A RlmtMode:Check Link State 2 Time(s): Virtual Wire compatibility mode. 2 Time(s): autonegotiation: yes 2 Time(s): duplex mode: full 2 Time(s): flowctrl: none 2 Time(s): ide0: BM-DMA at 0xfc00-0xfc07, BIOS settings: hda:pio, hdb:pio 2 Time(s): ide1: BM-DMA at 0xfc08-0xfc0f, BIOS settings: hdc:pio, hdd:pio 2 Time(s): irq moderation: disabled 2 Time(s): rx-checksum: disabled 2 Time(s): scatter-gather: disabled 2 Time(s): speed: 100 2 Time(s): tx-checksum: disabled 1 Time(s): pIII_sse : 4821.000 MB/sec 1 Time(s): pIII_sse : 4822.000 MB/sec 2 Time(s): IO window: e000-efff 2 Time(s): MEM window: fbf00000-fbffffff 2 Time(s): PREFETCH window: 20000000-200fffff 2 Time(s): Type: Direct-Access ANSI SCSI revision: 05 2 Time(s): Vendor: ATA Model: Hitachi HDS72168 Rev: P21O 2 Time(s): BIOS-e820: 0000000000000000 - 000000000009fc00 (usable) 2 Time(s): BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved) 2 Time(s): BIOS-e820: 00000000000e6000 - 0000000000100000 (reserved) 2 Time(s): BIOS-e820: 0000000000100000 - 000000001f7b0000 (usable) 2 Time(s): BIOS-e820: 000000001f7b0000 - 000000001f7c0000 (ACPI data) 2 Time(s): BIOS-e820: 000000001f7c0000 - 000000001f7f0000 (ACPI NVS) 2 Time(s): BIOS-e820: 000000001f7f0000 - 000000001f800000 (reserved) 2 Time(s): BIOS-e820: 00000000ffb80000 - 0000000100000000 (reserved) 2 Time(s): sda: sda1 sda2 sda3 2 Time(s): ..TIMER: vector=0x31 apic1=0 pin1=2 apic2=0 pin2=0 2 Time(s): 0MB HIGHMEM available. 2 Time(s): 3ware 9000 Storage Controller device driver for Linux v2.26.02.007. 2 Time(s): 3ware Storage Controller device driver for Linux v1.26.02.001. 2 Time(s): 503MB LOWMEM available. 2 Time(s): ATA: abnormal status 0x7F on port 0xD407 2 Time(s): Adding 522104k swap on /dev/sda3. Priority:-1 extents:1 across:522104k 2 Time(s): Allocating PCI resources starting at 20000000 (gap: 1f800000:e0380000) 2 Time(s): BIOS-provided physical RAM map: 2 Time(s): Brought up 1 CPUs 2 Time(s): Built 1 zonelists. Total pages: 128944 2 Time(s): CPU0: Intel P4/Xeon Extended MCE MSRs (24) available 2 Time(s): CPU0: Intel(R) Pentium(R) 4 CPU 3.00GHz stepping 09 2 Time(s): CPU: L2 cache: 1024K 2 Time(s): CPU: Physical Processor ID: 0 2 Time(s): CPU: Trace cache: 12K uops, L1 D cache: 16K 1 Time(s): Calibrating delay using timer specific routine.. 5989.49 BogoMIPS (lpj=11978986) 1 Time(s): Calibrating delay using timer specific routine.. 5989.50 BogoMIPS (lpj=11979013) 2 Time(s): Checking 'hlt' instruction... OK. 2 Time(s): Checking if this processor honours the WP bit even in supervisor mode... Ok. 2 Time(s): Compat vDSO mapped to ffffe000. 2 Time(s): Console: colour VGA+ 80x25 2 Time(s): Copyright (c) 1999-2005 LSI Logic Corporation 2 Time(s): Copyright (c) 1999-2006 Intel Corporation. 2 Time(s): DMI 2.3 present. 2 Time(s): Dentry cache hash table entries: 65536 (order: 6, 262144 bytes) 1 Time(s): Detected 2992.767 MHz processor. 1 Time(s): Detected 2992.772 MHz processor. 2 Time(s): Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) 2 Time(s): ENABLING IO-APIC IRQs 2 Time(s): EXT3 FS on sda1, internal journal 2 Time(s): EXT3 FS on sda2, internal journal 2 Time(s): EXT3-fs: INFO: recovery required on readonly filesystem. 4 Time(s): EXT3-fs: mounted filesystem with ordered data mode. 2 Time(s): EXT3-fs: recovery complete. 1 Time(s): EXT3-fs: sda1: 4 orphan inodes deleted 1 Time(s): EXT3-fs: sda1: orphan cleanup on readonly fs 2 Time(s): EXT3-fs: write access will be enabled during recovery. 2 Time(s): Enabling APIC mode: Flat. 
Using 1 I/O APICs 2 Time(s): Enabling fast FPU save and restore... done. 2 Time(s): Enabling unmasked SIMD FPU exception support... done. 2 Time(s): ExtINT not setup in hardware but reported by MP table 2 Time(s): Freeing SMP alternatives: 20k freed 2 Time(s): Freeing unused kernel memory: 220k freed 2 Time(s): Fusion MPT SAS Host driver 3.04.01 2 Time(s): Fusion MPT SPI Host driver 3.04.01 2 Time(s): Fusion MPT base driver 3.04.01 2 Time(s): Fusion MPT misc device (ioctl) driver 3.04.01 2 Time(s): I/O APIC #2 Version 32 at 0xFEC00000. 2 Time(s): ICH5: IDE controller at PCI slot 0000:00:1f.1 2 Time(s): ICH5: chipset revision 2 2 Time(s): ICH5: not 100% native mode: will probe irqs later 2 Time(s): IP route cache hash table entries: 4096 (order: 2, 16384 bytes) 2 Time(s): IPv4 over IPv4 tunneling driver 2 Time(s): Initializing CPU#0 2 Time(s): Initializing Cryptographic API 2 Time(s): Inode-cache hash table entries: 32768 (order: 5, 131072 bytes) 2 Time(s): Intel MultiProcessor Specification v1.4 2 Time(s): Intel machine check architecture supported. 2 Time(s): Intel machine check reporting enabled on CPU#0. 2 Time(s): Intel(R) PRO/1000 Network Driver - version 7.1.9-k4-NAPI 2 Time(s): Kernel command line: auto BOOT_IMAGE=linux ro root=801 nousb 2 Time(s): Linux agpgart interface v0.101 (c) Dave Jones 2 Time(s): Linux version 2.6.18.1-xxxx-grs-ipv4-32 (root@kernel-32.ovh.net) (version gcc 3.3.5 (Debian 1:3.3.5-13)) #2 SMP Fri Nov 3 23:04:19 CET 2006 2 Time(s): Memory: 506412k/515776k available (2860k kernel code, 8896k reserved, 1080k data, 220k init, 0k highmem) 2 Time(s): Mount-cache hash table entries: 512 2 Time(s): NET: Registered protocol family 1 2 Time(s): NET: Registered protocol family 16 2 Time(s): NET: Registered protocol family 17 2 Time(s): NET: Registered protocol family 2 2 Time(s): Netfilter messages via NETLINK v0.30. 2 Time(s): OEM ID: ASUSTeK Product ID: APIC at: 0xFEE00000 2 Time(s): PCI quirk: region 0480-04bf claimed by ICH4 GPIO 2 Time(s): PCI quirk: region 0800-087f claimed by ICH4 ACPI/GPIO/TCO 2 Time(s): PCI->APIC IRQ transform: 0000:00:02.0[A] -> IRQ 16 2 Time(s): PCI->APIC IRQ transform: 0000:00:1f.1[A] -> IRQ 18 2 Time(s): PCI->APIC IRQ transform: 0000:00:1f.2[A] -> IRQ 18 2 Time(s): PCI->APIC IRQ transform: 0000:01:0d.0[A] -> IRQ 23 2 Time(s): PCI: Bridge: 0000:00:1e.0 2 Time(s): PCI: Enabling device 0000:00:1f.1 (0005 -> 0007) 2 Time(s): PCI: Ignore bogus resource 6 [0:0] of 0000:00:02.0 2 Time(s): PCI: Ignoring BAR0-3 of IDE controller 0000:00:1f.1 2 Time(s): PCI: PCI BIOS revision 2.10 entry at 0xf0031, last bus=1 2 Time(s): PCI: Probing PCI hardware 2 Time(s): PCI: Transparent bridge - 0000:00:1e.0 2 Time(s): PCI: Using IRQ router PIIX/ICH [8086/24d0] at 0000:00:1f.0 2 Time(s): PCI: Using configuration type 1 2 Time(s): PID hash table entries: 2048 (order: 11, 8192 bytes) 2 Time(s): Processor #0 15:4 APIC version 20 2 Time(s): Processors: 1 2 Time(s): Real Time Clock Driver v1.12ac 4 Time(s): SCSI device sda: 160836480 512-byte hdwr sectors (82348 MB) 4 Time(s): SCSI device sda: drive cache: write back 2 Time(s): SCSI subsystem initialized 2 Time(s): SGI XFS with large block numbers, no debug enabled 2 Time(s): SMP alternatives: switching to UP code 2 Time(s): Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled 2 Time(s): Setting up standard PCI resources 2 Time(s): Software Watchdog Timer: 0.07 initialized. 
soft_noboot=0 soft_margin=60 sec (nowayout= 0) 2 Time(s): TCP bic registered 2 Time(s): TCP bind hash table entries: 8192 (order: 4, 65536 bytes) 2 Time(s): TCP established hash table entries: 16384 (order: 5, 131072 bytes) 2 Time(s): TCP reno registered 2 Time(s): TCP: Hash tables configured (established 16384 bind 8192) 2 Time(s): Time: tsc clocksource has been installed. 1 Time(s): Total of 1 processors activated (5989.49 BogoMIPS). 1 Time(s): Total of 1 processors activated (5989.50 BogoMIPS). 2 Time(s): Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2 2 Time(s): Using IPI Shortcut mode 2 Time(s): VFS: Disk quotas dquot_6.5.1 2 Time(s): VFS: Mounted root (ext3 filesystem) readonly. 2 Time(s): ata1: SATA max UDMA/133 cmd 0xD400 ctl 0xD002 bmdma 0xC000 irq 18 2 Time(s): ata2.00: ATA-7, max UDMA/133, 160836480 sectors: LBA48 NCQ (depth 0/32) 2 Time(s): ata2.00: ata2: dev 0 multi count 16 2 Time(s): ata2.00: configured for UDMA/133 2 Time(s): ata2: SATA max UDMA/133 cmd 0xC800 ctl 0xC402 bmdma 0xC008 irq 18 2 Time(s): ata_piix 0000:00:1f.2: MAP [ P0 -- P1 -- ] 2 Time(s): device-mapper: ioctl: 4.7.0-ioctl (2006-06-24) initialised: dm-devel@redhat.com 2 Time(s): drivers/rtc/hctosys.c: unable to open rtc device (rtc0) 2 Time(s): e100: Copyright(c) 1999-2005 Intel Corporation 2 Time(s): e100: Intel(R) PRO/100 Network Driver, 3.5.10-k2-NAPI 2 Time(s): eth0: Yukon Gigabit Ethernet 10/100/1000Base-T Adapter 2 Time(s): eth0: network connection up using port A 2 Time(s): floppy0: no floppy controllers found 2 Time(s): found SMP MP-table at 000ff780 2 Time(s): ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx 2 Time(s): io scheduler anticipatory registered (default) 2 Time(s): io scheduler cfq registered 2 Time(s): io scheduler deadline registered 2 Time(s): io scheduler noop registered 2 Time(s): ip_conntrack version 2.4 (4029 buckets, 32232 max) - 224 bytes per conntrack 2 Time(s): ip_tables: (C) 2000-2006 Netfilter Core Team 4 Time(s): kjournald starting. Commit interval 5 seconds 2 Time(s): klogd 1.4.1, log source = /proc/kmsg started. 2 Time(s): loop: loaded (max 8 devices) 4 Time(s): md: ... autorun DONE. 4 Time(s): md: Autodetecting RAID arrays. 4 Time(s): md: autorun ... 2 Time(s): md: bitmap version 4.39 2 Time(s): md: linear personality registered for level -1 2 Time(s): md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 2 Time(s): md: multipath personality registered for level -4 2 Time(s): md: raid0 personality registered for level 0 2 Time(s): md: raid1 personality registered for level 1 2 Time(s): md: raid4 personality registered for level 4 2 Time(s): md: raid5 personality registered for level 5 2 Time(s): md: raid6 personality registered for level 6 2 Time(s): megasas: 00.00.03.01 Sun May 14 22:49:52 PDT 2006 2 Time(s): mice: PS/2 mouse device common for all mice 2 Time(s): migration_cost=0 2 Time(s): monitor/mwait feature present. 
2 Time(s): mptctl: /dev/mptctl @ (major,minor=10,220) 2 Time(s): mptctl: Registered with Fusion MPT base driver 2 Time(s): raid5: automatically using best checksumming function: pIII_sse 1 Time(s): raid5: using function: pIII_sse (4821.000 MB/sec) 1 Time(s): raid5: using function: pIII_sse (4822.000 MB/sec) 1 Time(s): raid6: int32x1 862 MB/s 1 Time(s): raid6: int32x1 863 MB/s 2 Time(s): raid6: int32x2 795 MB/s 2 Time(s): raid6: int32x4 708 MB/s 1 Time(s): raid6: int32x8 543 MB/s 1 Time(s): raid6: int32x8 544 MB/s 1 Time(s): raid6: mmxx1 1831 MB/s 1 Time(s): raid6: mmxx1 1840 MB/s 2 Time(s): raid6: mmxx2 2122 MB/s 2 Time(s): raid6: sse1x1 1057 MB/s 1 Time(s): raid6: sse1x2 1208 MB/s 1 Time(s): raid6: sse1x2 1210 MB/s 1 Time(s): raid6: sse2x1 2099 MB/s 1 Time(s): raid6: sse2x1 2101 MB/s 1 Time(s): raid6: sse2x2 2252 MB/s 1 Time(s): raid6: sse2x2 2254 MB/s 1 Time(s): raid6: using algorithm sse2x2 (2252 MB/s) 1 Time(s): raid6: using algorithm sse2x2 (2254 MB/s) 2 Time(s): scsi0 : ata_piix 2 Time(s): scsi1 : ata_piix 2 Time(s): sd 1:0:0:0: Attached scsi disk sda 4 Time(s): sda: Write Protect is off 2 Time(s): serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A 2 Time(s): serio: i8042 AUX port at 0x60,0x64 irq 12 2 Time(s): serio: i8042 KBD port at 0x60,0x64 irq 1 2 Time(s): tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com> 2 Time(s): tun: Universal TUN/TAP device driver, 1.6 2 Time(s): using mwait in idle threads.
---------------------- Kernel End -------------------------
We're looking to bring in a T3 for our small startup hosting company, and when we do traces from multiple locations the route always runs through a cox.net IP. It concerns me because I don't want our customers to believe they're being hosted on some kid's cable modem. What do you folks suggest? The IP is 64.19.96.5 to their outer router. Should it be a concern that everyone's traffic routes through a cox.net IP?
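One quick thing to check is how that hop actually presents itself to a customer running a traceroute, since what shows up is the reverse DNS name rather than who owns the block:

host 64.19.96.5          # or: dig -x 64.19.96.5 +short

If the PTR record comes back looking like a generic residential cox.net name, it's worth asking the carrier whether that hop can be given a more neutral reverse DNS entry; plenty of business circuits ride Cox's network without looking like a cable modem.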
This is a follow-up to my original thread [url] regarding my client's experiences with HostWay.
I simply can't believe all of this, and I went through it.
The low points of the whole thing work out like this: in order to set up an SSL certificate for my client's site, we needed a dedicated IP. If you do a traceroute on my client's URL, it resolves to someone else altogether. In the meantime, I tried to purchase an SSL certificate through HostWay, and when they didn't respond in a week and a half, I e-mailed cancelling the order and purchased a (thankfully) inexpensive SSL certificate from GoDaddy. I got an e-mail back within hours from HW saying, literally, "We never processed your SSL order, so there's nothing to cancel. Let us know if there's anything else we can do."
As I mentioned before, e-mail support seems to be handled off-shore, and it takes over a week for most answers. Phone support gets you, I found out, a service in Florida.
While the people I dealt with on the phone were always professional and polite, they literally could do almost nothing. I was told several times, "I have to e-mail someone in Chicago - no, I don't know who it is, all I have is an e-mail address."
Back to the SSL: it seems HostWay had already installed a certificate on my client's site at some point, and it had nothing to do with my client. You could visit a secure version of the site, and it would warn you not to enter, as the cert didn't match the site.
My client and I both were on the phone with the Florida 800 number for hours at a time.
Average wait time to speak to someone was 30 minutes or so. I'm not carping about that part - but they were feeding us false information which was supposedly fed them from "Chicago." Specifically, I told them on the phone and via e-mail that the IP didn't resolve correctly, and that the old cert needed to be removed before a new one could go on (and only their SSL team can install certs, supposedly).
They told my client that the GoDaddy cert was causing them problems, and that it needed to be cancelled before they could install one of their GeoTrust certs. I nuked it - even though I knew better - and of course nothing was done. They lied to my client for several days, saying the new cert was installed (even though I knew it wasn't, and I told my client so, and showed them HW's tech was passing on false information).
This situation went on for almost two weeks. Finally, Monday night, my client got a supervisor based in British Columbia, Canada, who promised that he would walk "the tech admin" through fixing the problems that night. But that was only after my client threatened to pull his account.
Well, the IP is still screwed up, but they replaced the cert that night with one for which they charged my client an arm and a leg. The CC processing company is happy, so we let it ride, and they're now processing payments over the web.
If this is confusing, it's because I condensed many long days and nights into a few short paragraphs. Let's just say that HW didn't have their thinking caps on tight, because they committed their preposterous stories to e-mails which we all received.
Later this year, at a conference to be held in Canada, a committee of nuclear power station operators will be discussing whether or not they should keep HW as the host of their site. Gee, I wonder what the consensus will be.
Uptime = 0 days 0 hrs 4 min 15 sec
Avg. qps = 17
Total Questions = 4479
Threads Connected = 1
Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations
To find out more information on how each of these runtime variables effects performance visit: [url]
SLOW QUERIES
Current long_query_time = 10 sec.
You have 1 out of 4491 that take longer than 10 sec. to complete
The slow query log is NOT enabled.
Your long_query_time may be too high, I typically set this under 5 sec.

WORKER THREADS
Current thread_cache_size = 128
Current threads_cached = 6
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine

MAX CONNECTIONS
Current max_connections = 2000
Current threads_connected = 1
Historic max_used_connections = 7
The number of used connections is 0% of the configured maximum.
You are using less than 10% of your configured max_connections.
Lowering max_connections could help to avoid an over-allocation of memory
See "MEMORY USAGE" section to make sure you are not over-allocating

MEMORY USAGE
Max Memory Ever Allocated : 96 M
Configured Max Per-thread Buffers : 10 G
Configured Max Global Buffers : 58 M
Configured Max Memory Limit : 10 G
Total System Memory : 3.95 G

Max memory limit exceeds 85% of total system memory

KEY BUFFER
Current MyISAM index space = 78 M
Current key_buffer_size = 16 M
Key cache miss rate is 1 : 735
Key buffer fill ratio = 8.00 %
Your key_buffer_size seems to be too high.
Perhaps you can use these resources elsewhere

QUERY CACHE
Query cache is enabled
Current query_cache_size = 32 M
Current query_cache_used = 4 M
Current query_cach_limit = 1 M
Current Query cache fill ratio = 14.83 %
Your query_cache_size seems to be too high.
Perhaps you can use these resources elsewhere
MySQL won't cache query results that are larger than query_cache_limit in size
SORT OPERATIONS
Current sort_buffer_size = 2 M
Current record/read_rnd_buffer_size = 256 K
Sort buffer seems to be fine

JOINS
Current join_buffer_size = 1.00 M
You have had 0 queries where a join could not use an index properly
Your joins seem to be using indexes properly

OPEN FILES LIMIT
Current open_files_limit = 10000 files
The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine

TABLE CACHE
Current table_cache value = 1024 tables
You have a total of 721 tables
You have 93 open tables.
The table_cache value seems to be fine

TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 32 M
Of 212 temp tables, 0% were created on disk
Effective in-memory tmp_table_size is limited to max_heap_table_size.
Created disk tmp tables ratio seems fine

TABLE SCANS
Current read_buffer_size = 1 M
Current table scan ratio = 17754 : 1
You have a high ratio of sequential access requests to SELECTs
You may benefit from raising read_buffer_size and/or improving your use of indexes.

TABLE LOCKING
Current Lock Wait ratio = 1 : 76
You may benefit from selective use of InnoDB.
If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'
How do I make the changes highlighted in red? My server works well for a while, but then gets really, really slow.
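For reference, the flagged items map to settings in /etc/my.cnf under the [mysqld] section. A rough sketch based only on the tuner hints above; treat the numbers as starting points rather than gospel, since the right values depend on your workload:

[mysqld]
long_query_time = 5                            # tuner suggests under 5 sec
log_slow_queries = /var/log/mysql-slow.log     # enable the slow query log (MySQL 5.0/5.1 syntax)
max_connections = 200                          # 2000 is far above the 7 connections ever used
query_cache_size = 16M                         # 32M is more than the ~5M actually used
read_buffer_size = 2M                          # may help the high table-scan ratio; better indexes help more

Then restart MySQL (/etc/init.d/mysql restart, or mysqld depending on the install) and let it run 48 hours before re-running the tuner, as the warning at the top of the report says.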
I run the top command when memory usage seems to be running high on my server. I look at it and blink and have no real idea whether things are "okay" or not.
I apologize for how basic this question is. At the same time, I would love to have some kind of personal benchmark of "okayness" for this server, so I can look at top results when things are dreadfully wrong and recognize it.
Based on these results would you say the server is holding up under traffic? -----------------------------------
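One number I find easier to read than top's summary line is the "-/+ buffers/cache" row from free, which shows memory actually used by applications once the kernel's disk cache is excluded:

free -m

If the "used" figure on that row stays well below total RAM and swap usage in top stays near zero, memory is generally fine; sustained high load averages or heavy swapping are the things to watch for.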
Now we do notice that, quite frequently, the connection times out or the server responds slowly. A friend of mine said the VPS could be the cause of this.
I ran a ping test today for several hours, sending one ping with 16 bytes with a timeout set to 3 seconds.
Of the 18,000 pings sent (5 hours), 120 failed. That comes to one failure in 150 pings (about 0.7% packet loss), or one failure every 2.5 minutes.
Is this still an acceptable failure rate, or do I have reason to contact our VPS provider? According to our own usage statistics, we are not using much of the server's capacity.
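Roughly 0.7% loss is enough to notice on interactive connections, so it seems worth raising, but before contacting the provider it helps to show where the loss happens. mtr combines ping and traceroute and reports per-hop loss (your.vps.example.com is a placeholder for your VPS hostname):

mtr --report --report-cycles 100 your.vps.example.com

If the loss only shows up at the final hop, it points at the VPS or its host node; if it appears earlier in the path and carries through to the end, it's more likely a network or peering issue the provider can take to their upstream.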
I just read Peer 1's financials; they got big this year!
Does anyone know how many actual customers they have? The financials say $74.36 million... with 9,000 customers... hmm... they must have more than that? Just curious.
I'm new to server administration/security/troubleshooting, so I have included a lot of info here hoping it will help.
This started because a Linux VPS with CentOS and Exim crashed after only 3,000 of the 30,000 total emails were sent.
I ran netstat, and several times I get three separate IPs with the only difference being the last two octets and the port number: 86.104.230.29:59009 86.104.117.45:18065 89.37.137.157:41593
As far as I can tell they are from Romania, and there are several connections.
I have posted a lot of information below; if someone can take a look and give some ideas, it would be very much appreciated.
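When trying to make sense of output like that, one quick summary that helps is counting connections per remote IP, so you can see whether those Romanian addresses account for a handful of connections or hundreds:

netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head

(The first couple of result lines come from netstat's headers and can be ignored.) That, plus which process owns the sockets (netstat -ntp, run as root), usually narrows down whether it's your outbound mail queue or something connecting in from outside.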
I hope there is a DNS expert around who can make sense of what I observe and give an unbiased opinion.
We are currently evaluating hosted DNS providers. Anycast DNS seems like a great feature to have, and we want failover too. We have narrowed the list of possible suppliers down to DNS Made Easy, Netriplex and Dynect.
After reading up on some blogs (1 and 2 mainly), we set up a Pingdom test to evaluate our three candidates.
For DME I used their own site URL for testing; Netriplex and Dynect gave us dedicated test accounts.
The average response times roughly follow the prices: DME is slowest, Netriplex next, and Dynect is the winner. I have detailed logs (in CSV) if anyone is interested.
Now for the unexpected results. All three providers give very long response times a few times a day, sometimes as long as 5 or 10 seconds. Now and again we see a timeout, i.e. a response of over 15 seconds.
We cross-checked by running a test against our current non-anycast Rackspace DNS; similar outliers are present there too.
Pingdom tech support think these outliers could be due to peering issues on the internet.
I would expect anycast DNS to be much more robust and to give decent response times even if there are localised networking issues.
So our outliers are either down to the way Pingdom does the testing, or just a 'feature' of the way DNS works.
Anyone with any bright ideas on how to explain this?
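One way to separate Pingdom's probes from the DNS servers themselves is to run your own crude sampler from a box you control and log the query time dig reports. A sketch (ns0.dnsmadeeasy.com and example.com are just stand-ins for one provider's nameserver and a record it serves):

for i in $(seq 1 100); do
  dig @ns0.dnsmadeeasy.com example.com A | grep 'Query time'
  sleep 5
done

If your own samples from a couple of locations never show multi-second outliers while Pingdom still does, the outliers are most likely in Pingdom's probe network or its path, not in the anycast DNS itself.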