I work for a startup and also do independent consulting. A friend set up my Apache server for the startup initially, and it's been running fine. Now I want to host a client's site on the same server. I tried just adding a vhost section at the bottom of the conf file, but Apache complained that I was running two sites off the same port. Then I tried adapting what my buddy set up for the first site and putting two vhosts at the bottom, but that just didn't work.
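For name-based hosting on one port, Apache needs to be told the port is shared. A minimal sketch, assuming Apache 2.x on port 80; the domains and document roots are placeholders:

```apache
# httpd.conf -- tell Apache that *:80 is shared by name-based vhosts
# (required on Apache 2.2 and earlier; 2.4 does this automatically)
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName startup.example.com
    DocumentRoot /var/www/startup
</VirtualHost>

<VirtualHost *:80>
    ServerName client.example.com
    DocumentRoot /var/www/client
</VirtualHost>
```

Without the `NameVirtualHost` line (or with mismatched addresses in it), Apache 2.2 treats the two `<VirtualHost>` blocks as conflicting on the same port, which matches the error described. The first vhost listed also becomes the default for requests that match neither ServerName.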
So why does it still say Connection: Keep-Alive? I thought that with keep-alive off it should say Connection: close. And where did the timeout=1 and max=100 come from?
My httpd.conf:

Timeout 90
KeepAlive Off
KeepAliveTimeout 15
KeepAliveRequests 10

The HTTP headers report:

Keep-Alive: timeout=1, max=100
Connection: Keep-Alive
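One thing worth checking: the stock Apache directive is `MaxKeepAliveRequests`, not `KeepAliveRequests`, so that last line may be silently ignored or rejected depending on version. A sketch of the standard directive names (the values shown are just the ones from the config above, with the default 100 for the max):

```apache
# httpd.conf -- standard Apache keep-alive directives
Timeout 90
KeepAlive Off
KeepAliveTimeout 15
MaxKeepAliveRequests 100
```

If the directive names are correct and KeepAlive is genuinely off but the response still advertises `Keep-Alive: timeout=1, max=100`, the header is likely coming from another layer in front of Apache (a proxy, load balancer, or a second Apache instance), since those timeout/max values don't match this config at all.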
I wasn't sure where to post this, so here goes: I need to migrate a MySQL DB. In the past I have just created an SQL file and imported it (sometimes having to split the SQL file up), but now the DB is about 50 MB and 733,233 records.
Is there an easier way to migrate the Database from one server to another?
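One common approach is to dump on the old server and load on the new one in a single pipe over SSH, so no intermediate file ever has to be split. A sketch, assuming shell access to both boxes; the hostnames, user names, and database name are placeholders:

```
# Dump, compress, and load on the new server in one pipe.
# --single-transaction gives a consistent dump for InnoDB without locking;
# for MyISAM tables use --lock-tables instead.
mysqldump --single-transaction --quick -u dbuser -p mydb \
  | gzip -c \
  | ssh user@newserver 'gunzip -c | mysql -u dbuser -p mydb'
```

At 50 MB this should finish in minutes; the same pipeline scales to multi-gigabyte dumps since nothing is ever held fully in memory (`--quick` streams rows instead of buffering each table).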
I optimized a MySQL table of 2 million records and about 500 MB; it took about 15 minutes. However, on the same DB I now have another huge table of 88 million records. It is 2.2 GB in size and has about 30 MB of overhead to optimize. My questions:
1. How can I speed up the optimization so it takes as little time as possible? Any tweaks to my.cnf?
2. Should I repair it from phpMyAdmin or from the shell?
3. Should I stop HTTP traffic during the optimization?
This is a dedicated MySQL DB server that handles a large vBulletin forum with 5-8 users online on average.
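On the shell-vs-phpMyAdmin question: running it from the shell is generally safer, because a dropped browser session can't interrupt a long rebuild. A sketch, with placeholder database and table names:

```
# Run the rebuild from the shell so it survives a closed browser.
mysql -u root -p -e "OPTIMIZE TABLE post;" forumdb

# Or check/repair/optimize every table in the database in one pass:
mysqlcheck -u root -p --optimize forumdb
```

For my.cnf, the knobs that usually matter for a MyISAM rebuild are `myisam_sort_buffer_size` and `key_buffer_size` (raising them lets the index sort happen in memory), though how much they help depends on the table. OPTIMIZE TABLE locks the table while it runs, so pausing HTTP traffic avoids a pile-up of blocked forum queries during those minutes.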
Is there a way for me to cut up a large MySQL database dump in the shell? We're talking almost 8 GB here.

I would like to import it in chunks, something like 1 GB per shot. Is that doable with some tool I can use from the shell? I wouldn't want to use vim to slice up that sucker.

Or is there a better method of doing this? I already dumped the database and I'd rather not do it again, unless I could perhaps dump it in chunks.
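The standard `split(1)` tool can do this from the shell. Splitting by lines (`-l`) rather than bytes (`-b`) keeps each SQL statement intact, since mysqldump emits one statement per line. A sketch with a tiny stand-in file; for the real 8 GB dump, substitute the dump filename and a line count that yields roughly 1 GB pieces (e.g. `split -l 500000`):

```shell
# Stand-in for the real dump file (placeholder content).
printf 'INSERT 1;\nINSERT 2;\nINSERT 3;\nINSERT 4;\nINSERT 5;\n' > dump.sql

# Cut on line boundaries into numbered pieces: chunk_00, chunk_01, chunk_02
split -l 2 -d dump.sql chunk_

# Import the pieces one at a time:
# for f in chunk_*; do mysql -u user -p dbname < "$f"; done
cat chunk_00
```

An alternative worth noting: `mysqldump` accepts a list of table names, so the database can also be re-dumped table by table, which gives natural chunk boundaries without any splitting.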
options {
        default-key "rndc-key";
        default-server 127.0.0.1;
        default-port 953;
};
# End of rndc.conf

# Use with the following in named.conf, adjusting the allow list as needed:
# key "rndc-key" {
#       algorithm hmac-md5;
#       secret "KLGSBmWZrev0I4fR4Tm4GXxdcYSTFzF23b1f9is1M=";
# };
#
# controls {
#       inet 127.0.0.1 port 953
#               allow { 127.0.0.1; } keys { "rndc-key"; };
# };
# End of named.conf

Then I took a look at named.conf:
Code:
options {
        /* make named use port 53 for the source of all queries, to allow
         * firewalls to block all ports except 53:
         */
        // query-source port 53;
        /* We no longer enable this by default as the dns poison exploit has
         * forced many providers to open up their firewalls a bit */

        // Put files that named is allowed to write in the data/ directory:
        directory "/var/named";         // the default
        pid-file "/var/run/named/named.pid";
        dump-file "data/cache_dump.db";
        statistics-file "data/named_stats.txt";
        /* memstatistics-file "data/named_mem_stats.txt"; */
};

logging {
        /* If you want to enable debugging, eg. using the 'rndc trace' command,
         * named will try to write the 'named.run' file in the $directory (/var/named).
         * By default, SELinux policy does not allow named to modify the /var/named
         * directory, so put the default debug log file in data/ :
         */
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

// All BIND 9 zones are in a "view", which allow different zones to be served
// to different types of client addresses, and for options to be set for groups
// of zones.
//
// By default, if named.conf contains no "view" clauses, all zones are in the
// "default" view, which matches all clients.
//
// If named.conf contains any "view" clause, then all zones MUST be in a view;
// so it is recommended to start off using views to avoid having to restructure
// your configuration files in the future.

view "localhost_resolver" {
        /* This view sets up named to be a localhost resolver ( caching only nameserver ).
         * If all you want is a caching-only nameserver, then you need only define this view:
         */
        match-clients { 127.0.0.0/24; };
        match-destinations { localhost; };
        recursion yes;

        zone "." IN {
                type hint;
                file "/var/named/named.ca";
        };

        /* these are zones that contain definitions for all the localhost
         * names and addresses, as recommended in RFC1912 - these names should
         * ONLY be served to localhost clients:
         */
        include "/var/named/named.rfc1912.zones";
};
I have a busy dating website with 30,000 registered users and ~200-600 users online at any time. I would like to offer free email with a ~10 MB mailbox to all users.
I'm considering the scripts provided by b1gmail.de; it's similar to Hivemail or SocketMail. It uses a single POP3 catch-all mailbox and stores all emails, including attachments, in a MySQL database.
My worries are about MySQL. If I have 30,000 users and each user has some 5,000 messages in their mailbox: 30,000 x 5,000 = 150,000,000.
That's 150 million rows in one table!
I know not all users will have 5,000 messages in their mailboxes, but the number of users is growing by about 2,000/month.

I can't imagine how long MySQL will need to find each user's messages in a table with 150,000,000 rows.
I don't know - maybe it's not a problem at all. I just never had such large tables and I don't know if it's possible at all.
Another problem: I have Fedora Core 2 installed and don't even know yet whether it supports files larger than 2 GB.

Maybe it's better to set up normal POP3 mailboxes for all users instead of using one catch-all box and storing the data in MySQL?
Please don't post warnings about spammers; in the beginning I'll provide email addresses only to "gold" members. I opened this thread because I don't want to set up a system that hangs after a couple of months because MySQL can't handle it, or because of other problems I haven't foreseen.
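For what it's worth, row count alone is rarely the killer; what matters is that mailbox lookups are indexed so MySQL never scans the full table. A hypothetical schema sketch (the table and column names are mine, not b1gmail's) where the composite index makes "list one user's inbox" touch only that user's rows:

```sql
-- Hypothetical mail table; the decisive part is the composite index.
CREATE TABLE mail (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id     INT UNSIGNED    NOT NULL,
    received_at DATETIME        NOT NULL,
    subject     VARCHAR(255)    NOT NULL,
    body        MEDIUMTEXT      NOT NULL,
    INDEX idx_user_date (user_id, received_at)
);

-- One user's inbox, newest first: resolved entirely through the index,
-- regardless of whether the table holds 1M or 150M rows.
SELECT id, subject, received_at
FROM mail
WHERE user_id = 12345
ORDER BY received_at DESC
LIMIT 50;
```

The bigger practical risk with this design is storing attachments as BLOBs: at 10 MB per mailbox the table data (and backups) balloon quickly, which is where the 2 GB file-size worry becomes real for a single table file. Keeping attachments on disk and only metadata in MySQL sidesteps both problems.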
My dedicated server is being slow - hopefully someone can lend a helping hand.
Processor: AMD Athlon 4200 (single CPU, dual core)
Memory: 2048 MB RAM
Primary hard drive: 160 GB
Operating system: CentOS 4.x x86_64
Control panel: cPanel
Uplink port: 100 Mbit
I have a VPS (CentOS 4.4, OpenVZ). It all works fine, but on reboot resolv.conf gets reset to nameservers that are no longer in use. How do I change this so that after a reboot it uses the nameservers I'm using now?
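On OpenVZ the container's resolv.conf is typically written by the host node at start, which is why in-container edits get clobbered. If you control the node (otherwise, ask your provider to do this), the fix is set on the host side. A sketch; the container ID and nameserver IPs are placeholders:

```
# Run on the OpenVZ host node, NOT inside the VPS.
# 101 is a placeholder container ID; substitute your nameserver IPs.
vzctl set 101 --nameserver 192.0.2.1 --nameserver 192.0.2.2 --save
```

With `--save`, the values persist in the container's config file, so every subsequent reboot regenerates resolv.conf with the new nameservers instead of the stale ones.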
Recently I changed server providers, so now I'm looking for a way to transfer all the data to my new server. I have a total of 420GBs of files in my secondary HDD that need to be transferred.
The old server is on a 10 Mbps line, the new one on a 100 Mbps line. On the old server, less than half the pipe is actively used, so theoretically I should be able to transfer it all in about a week.
I tried:

1) SCP. That was waaay too unreliable, and I couldn't get it to restart from where it left off whenever the transfer stopped (like when the servers were restarted).

2) Transfer using a web script. Way too slow: it got to about 35 GB, and the total would have taken about 2 months.
Is there any other, reliable way of transferring data from server to server?
In reference to my previous post, I want to transfer across 7 GB of data, approximately 80,000 files I believe (due to a gallery script).

It's currently on another host (a webhosting account) whose control panel has no options beyond managing databases, so the only way I can see to do this is via FTP, but that would take me days. I've tried compression and backup scripts, but the execution time limit on the host's server is too low to allow the files to be zipped. Are there any other ways? Can I log in to my VPS via SSH and somehow pull the files off the other host's server?
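Yes, pulling from the VPS side works: SSH into the VPS and mirror the old account over FTP from there, so the transfer runs server-to-server instead of through your home connection, and the old host's script time limits never apply. A sketch; the hostname, credentials, and paths are placeholders:

```
# Run from the VPS shell. wget mirrors recursively, retries on failure,
# and -c resumes partially downloaded files after an interruption.
wget -m -c "ftp://user:password@oldhost.example.com/public_html/"

# If lftp is installed, its mirror mode does the same and is also resumable:
#   lftp -e 'mirror --continue /public_html /home/user/public_html; quit' \
#        -u user,password oldhost.example.com
```

With 80,000 small files the per-file FTP overhead dominates, so expect hours rather than minutes either way, but a server-to-server mirror left running in screen will get there unattended.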
I'm looking for dedicated servers located in the USA and Canada (one in each) with unlimited or large (5-10 TB) bandwidth on a 100 Mbit port.
I run a large adult vBulletin community with 70,000 members, half a million posts, 186,000 attachments (a lot of video), and closing in on 100 million downloads since our start some odd years ago. I've been battling to keep the site up for quite some time, and I'm starting to wonder whether we shot too low on the server setup. I figured I'd ask the pros here at WHT for some advice.
This is our current setup:
Site server:
Quote:
CPU: Intel(R) Core(TM)2 Duo E4500 @ 2.20GHz
RAM: 4 GB
Hard drive: 250 GB SATA
OS: FreeBSD 6.2
Web server: Apache
MySQL server:
Quote:
CPU: Pentium III/Pentium III Xeon/Celeron (2666.62-MHz 686-class CPU), 4 cores per package
RAM: 4 GB
Hard drive: 750 GB SATA
OS: FreeBSD 6.4
Web server: Apache
Do you think the site would perform better on one server with a more powerful processor? What exactly should I be looking at as far as hardware goes for this type of site? I should note we push about 2.5 TB of bandwidth monthly.
I have found SL can offer 12 x 1 TB drive systems; after RAID-5 and a Win 2003 install you get just over 10 TB of storage. The monthly price works out to $1000/month.

I know some time ago LeaseWeb offered this type of storage... does anyone else know of any others?
My main client wants to rehash his database. This is a 1.5M-strong list, all legitimately collected with time/IP stamps, a privacy policy, etc. These subscribers are from the online gambling industry (legally licensed).
The problem is many of these users subscribed to our services up to 4 years ago (not all are that old, but some are), and they haven't heard from us for up to 2 years (again, some heard from us more recently).
Anyway, I've never dealt with that number of emails and potential bounces. Obviously the first round of emailing will have a large number of bounces, but that will quickly subside.

So, can you guys point me to a quality dedicated server, with at least 4 IPs (hopefully 10), that can handle this type of activity? I'll be glad to sign up under an affiliate link if I can get a good answer.
My company is going to launch four online retail sites and is in need of a dedicated server provider that can give us room to grow from minimal traffic to possibly 2,000-5,000 visitors an hour.