Trying To Tar Up A Folder With 100's Of Thousands Of Files = Corrupted
May 9, 2008
OS: CentOS 5.1 32 bit fully updated.
I'm trying to tar a folder that has hundreds of thousands of files, and I've ensured that no files are being added or modified in that folder while the command below is being executed:
nice --adjustment=20 tar -cf users_from.tar users_from
I've tried it multiple times and it always stops before it finishes, ending up with a corrupted .tar file that gives errors when extracted and is obviously missing a lot of files. Sometimes it creates 200+ MB, sometimes 50 MB, before it stops.
I also have enough RAM + swap for the operation so that can't be the cause.
So is it just impossible to tar a directory with so many files? And is it even possible to get a list of the files in that directory?
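In case it helps narrow things down, here is a minimal sketch of what I plan to try next (same directory name as in the command above, everything else is just an example): build the file list first, then feed it to tar and verify the archive afterwards.

# list every regular file and count them
find users_from -type f > filelist.txt
wc -l filelist.txt

# build the archive from that list and keep tar's exit status
nice -n 19 tar -cf users_from.tar -T filelist.txt
echo "tar exit status: $?"

# read the archive back to check it is not truncated
tar -tf users_from.tar > /dev/null && echo "archive reads back cleanly"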
I am having a problem with a server. core.xxxx files keep appearing on all of the sites on the server and end up filling it. Quotas were disabled because some people had issues logging in due to this error:
Quote:
Sorry for the inconvenience!
The filesystem mounted at /home/*** on this server is running out of disk space. cPanel operations have been temporarily suspended to prevent something bad from happening.
Please ask your system admin to remove any files not in use on that partition.
How do I remove all of them so they don't appear again? On some sites there are thousands of core.xxxx files, weighing over 60 GB in total.
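For reference, a rough sketch of the cleanup and prevention steps I have in mind, assuming the sites live under /home (adjust paths as needed):

# delete the existing core dumps (swap -delete for -print first to do a dry run)
find /home -type f -name 'core.*' -delete

# stop new core dumps: set the core file size limit to 0 for all users
echo '* hard core 0' >> /etc/security/limits.conf

# and disable core dumps from setuid programs via sysctl
echo 'fs.suid_dumpable = 0' >> /etc/sysctl.conf
sysctl -p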
We are setting up hundreds of new domain names every day. We are only hosting a simple blog; in fact, with our current setup we are on a HostGator Reseller account and we are just using one account, meaning one single cPanel account.
We have a script that automatically runs when people sign up for our service. This script sets up an addon domain on that same single cPanel account with the same document root. Our modified WordPress blog simply looks at HTTP_HOST in the config file and opens a separate set of database tables for every new domain name.
The problem we are running into is not bandwidth usage, nor storage space, but simply the mass addon domains. The cPanel adddomain.html script seems to be getting run so many times that it is overloading the web server.
So I have read about some other people here on WHT who are starting to use new server software that uses a lot fewer resources than WHM and cPanel. So I am wondering what hosting companies can provide that sort of server.
Specifically:
Storage: a couple hundred GB
Bandwidth: thousands of GB
Server software to run PHP scripts and MySQL databases
Ability to create thousands of addon domains every day
I'm facing a very strange problem: my forum folder is using almost 68 GB of space, but the biggest folder inside it is the uploads folder, which uses 32 GB. So where are the remaining 34 GB? When I try to check the size of the directories in the forum tree, this is what it shows:
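For what it's worth, a quick sketch of how I'd normally hunt down missing space like this (the path is a placeholder):

# per-subdirectory totals in KB, largest last
cd /path/to/forum
du -sk ./* | sort -n | tail -20

# compare with what the filesystem itself reports
df -h .

# files that were deleted but are still held open also count against the disk
lsof +L1 | grep -i forum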
I paid a programmer to make me a custom image script. Everything works perfectly... the only problem is that all images are being stored in the same folder. Will that make my server too slow? We are talking about thousands of images.
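If it does become a problem, a common workaround is to shard the images into subfolders based on the first characters of the filename; here is a rough shell sketch under that assumption (the path and extensions are just examples):

# move existing images into two-level subfolders, e.g. ab12cd.jpg -> a/b/ab12cd.jpg
cd /path/to/images
for f in *.jpg *.png *.gif; do
    [ -f "$f" ] || continue
    d="${f:0:1}/${f:1:1}"
    mkdir -p "$d"
    mv -- "$f" "$d/"
done

The image script would then have to build paths the same way, so this is only worth doing if the directory really does get slow to list or open.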
There have been no changes made to any sites on my server for which I can pinpoint to be the cause of this problem...
Basically, I received notice that my TMP folder was 100% full... so a look into what was taking up all the space revealed several weird .MYI and .MYD files that I know nothing about.
I cannot open them or view any of their contents. I cannot even edit them.
Does anyone have any information about what these are or why they are in my TMP folder?
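A small sketch of the checks I'd run on them, assuming the files sit directly in /tmp (they usually turn out to be MySQL on-disk temporary tables):

# see which process has the files open
lsof /tmp/*.MYI /tmp/*.MYD

# where MySQL has been told to put its temporary tables
mysql -e "SHOW VARIABLES LIKE 'tmpdir';"

# lots of on-disk temp tables usually points at unindexed queries or a small tmp_table_size
mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';"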
I am having trouble with moving my files to another server.
I have one server and one hosting account (no SSH); both use different control panels, and I need to move all of the media files (movies) from the hosting account to the other server, since the hosting account's contract will expire soon.
So I am thinking of using FTP to transfer multiple files with the mget command. But the thing is, I would need to be there to press Enter (to accept the download) for every single file, which is very time-consuming since I have hundreds of files to move.
So my questions are: 1) Is there a better way to move files in this situation?
2) Or is there a shell script I can use to download all of the files to my server without pressing Enter to accept every single file?
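For reference, two non-interactive ways that should cover this, sketched with a placeholder host and credentials:

# option 1: the stock ftp client -- "prompt" toggles off the per-file question,
# so mget grabs everything without asking
ftp ftp.example.com
# ftp> prompt
# ftp> cd /path/to/movies
# ftp> mget *

# option 2: mirror the whole directory with wget, fully unattended
wget -m --user=USERNAME --password=PASSWORD ftp://ftp.example.com/path/to/movies/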
The problem is a big one, since the backup procedure does not back up all files and folders. I have a lot of WordPress installations (some of them made with App Installer and others made by hand).
In all of the above installations, the files and folders created by WordPress, such as installed plugins or uploaded images, are not backed up!
I have seen that all of these files have the www-data:www-data user and group, but with read permissions for owner, group, and others, so I am not able to understand why they are not inside my backups. If I change the user and group, everything works fine.
Check your WordPress backups and verify the presence of plugins and images (the image thumbnails are still present, since they are created with a different user and group)...
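Since changing the owner is the only workaround that has worked for me so far, here is the sketch I use (the site path and the account user "siteuser" are placeholders):

# give the files back to the hosting account user so the backup job can pick them up
chown -R siteuser:siteuser /var/www/example.com/wp-content/uploads
chown -R siteuser:siteuser /var/www/example.com/wp-content/plugins

PHP running as www-data still needs write access afterwards, so the web server user may need group write permission on those directories.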
How do I set up a cron job to copy files and directories from one folder to the root folder? I have cPanel X.
My root directory is public_html/. I have another directory, public_html/uploads, containing both files and directories.
I need a cron job that will copy all the files and directories from public_html/uploads to the root public_html/. Something like the line below is what I have in mind.
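A minimal crontab sketch, assuming the account username and an hourly schedule (both placeholders):

# copy everything (including subdirectories) once an hour, preserving permissions
0 * * * * cp -Rp /home/username/public_html/uploads/* /home/username/public_html/ >/dev/null 2>&1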
If it helps, here is some system info
General server information:
Operating system: Linux
Kernel version: 2.6.22_hg_grsec_pax
Apache version: 1.3.39 (Unix)
PERL version: 5.8.8
Path to PERL: /usr/bin/perl
Path to sendmail: /usr/sbin/sendmail
PHP version: 4.4.4
MySQL version: 4.1.22-standard
cPanel Build: 11.17.0-STABLE 19434
Theme: cPanel X v2.6.0
I had a Problem with my FTP-Backup space, so PLESK couldn't do the daily backups that I configured. The problem with the ftp backup is solved. The backups are running again but there are still many large temporary files in a plesk folder.
Can I just delete them, or is this a bad idea?
The folder: C:\Program Files (x86)\Parallels\Plesk\PrivateTemp
I have noticed that I never installed any programs on my server, my web files are only 5 GB, and Windows takes 15 GB (my hard disk usage is 30 GB). Now my available disk space is 1.7 GB. But when I check the Recycler folder, there are many files taking up huge amounts of space, some in excess of 10 GB. Could I delete these files? How can I automatically delete the contents of the Recycler folder?
I have a problem with users who want to download files that are in a protected folder. They don't get the login popup when they click on a link; if they use a direct URL then they get the login, but the download doesn't begin.
iPhone OS 7.12
Plesk Control Panel version: psa v8.4.0_build20080505.00 os_Windows 2003/2008
Operating system: Microsoft Windows 5.2; build 3790; SP 2.0; suite 272; product 3
Currently, we use PowerDNS with MySQL replication on multiple servers. This solution is kind of okay for now, but I'd like to know if there are any better solutions than PowerDNS.
For the last 6 months our site has been under a severe brute-force SYN flood attack. They keep bombarding a single URL on the server, an XML file. They are not attacking any other URL.
We have removed the XML page from our site, but they still keep sending requests; this has been going on non-stop for the last 6 months.
We even changed the IP just to see, and they are still sending several thousand requests per second. The requests come from different IPs and different ranges, so you cannot even block them. They appear to be coming from legitimate IPs.
Because of this I have had to pay for an extremely expensive server with 8 GB of RAM, a quad-core processor, etc.; however, even with this, the server still reaches critical load, just because these requests are eating up my resources.
Our technical team has been working on all aspects of Apache server security, external modules, the software firewall, and the hardware firewall from the beginning, but we are still not able to stop them.
We have installed following modules.
4) mod_security
5) mod_evasive
6) Firewall
7) SYN cookies enabled
We have worked with the hosting company and their technical team leader, he installed the best CISCO hardware firewall and tried to stop them, but in vain.
We have checked our server to see if anything on our site is causing the requests: no extra files have been uploaded to the server, and no text has been added to existing files (i.e. we checked whether we had been hacked). Even so, I am still wondering if there is something we do not know about. Can a hack lead to huge amounts of traffic?
We need some help to stop these attacks. We have searched a lot and have found that sites attacked like this often have only one option: shut down until it stops. I really hope that will not be the case for us. Please let us know if anyone has any ideas on how to deal with this.
Also, could it be part of our own PHP code that is doing this? We are ready to check every PHP file to make sure it does not contain any dangerous line of code.
We are working with the hardware firewall company to drop requests for that single URL on the spot, but it is still being set up.
We have antivirus running on the server; however, if any specific antivirus or anti-malware tool is needed, we can try that.
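In the meantime, since the attack targets a single URL, one thing we are considering is dropping those requests as early as possible at the packet filter; a rough iptables sketch, with /attacked.xml standing in for the real file name:

# drop packets that carry the attacked request line (placeholder path)
iptables -A INPUT -p tcp --dport 80 -m string --string "GET /attacked.xml" --algo bm -j DROP

# and rate-limit new connections per source IP as a general brake
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 1 --hitcount 20 -j DROP

The string match only helps once a connection completes, so it does nothing against the raw SYN flood itself; syncookies and the hardware firewall still have to carry that part.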
Following are the details I have got from my Linux admin; this should help trace the issue. Problem: Apache SYN_RECV
OS: RHEL 5
Kernels: 2.6.18-92.1.22.el5 x86_64, 2.6.18-92.el5 x86_64
OS type:
# cat /etc/issue
Red Hat Enterprise Linux Server release 5.2 (Tikanga)
# cat /proc/version
Linux version 2.6.18-92.1.22.el5 (mockbuild@hs20-bc2-5.build.redhat.com) (gcc version 4.1.2 20071124 (Red Hat 4.1.2-42)) #1 SMP Fri Dec 5 09:28:22 EST 2008
What we have done so far with the configuration is listed below.
############### sysctl.conf ###############
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_synack_retries = 2

# Enable IP spoofing protection, turn on Source Address Verification
net.ipv4.conf.all.rp_filter = 1

# Enable TCP SYN Cookie Protection
net.ipv4.tcp_syncookies = 1

# 65536 seems to be the max it will take
net.ipv4.ip_conntrack_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
I am running a very successful wiki-based website that has outgrown our current web host. The site runs very slowly because our host says we are hitting the memory limit on the server (we are currently on a shared hosting plan).
Thousands of visitors per day
Ten thousand page views per day (all PHP)
20 GB bandwidth per month
MySQL database
My server is running mailscanner-4.56.8-1. Lately many of our customers have complained that mail sent to and from our server takes hours to be delivered.
I tested this myself by sending test emails to and from my Hotmail account, and they took a long time to be received and delivered.
Also, in /var/log/maillog I see entries such as the one below: "Jan 4 20:39:36 www MailScanner[8461]: New Batch: Found 17678 messages waiting"
So I understand there are about 18 thousand emails in MailScanner's /var/spool/mqueue.in folder.
To test, I stopped MailScanner and started Sendmail, sent an email to my Hotmail address, and it got delivered immediately; but when I restarted MailScanner and resent the same message, it took 20 minutes to be delivered.
- How do I improve MailScanner processing so that messages are delivered faster?
- Do I need to change the "Max Children = 5" setting in /etc/MailScanner/MailScanner.conf?
- How do I force delivery of the 18 thousand emails in the mqueue.in folder?
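For reference, the kind of thing I am planning to try, sketched with guessed values rather than recommendations:

# check the current worker limit in MailScanner.conf before raising it
# (e.g. from 5 to something like 20 on a multi-core box)
grep -n "^Max Children" /etc/MailScanner/MailScanner.conf

# restart MailScanner so the new value takes effect
service MailScanner restart

# ask sendmail to run its deferred queue now (mqueue is the outbound queue;
# mqueue.in is normally drained by MailScanner itself)
sendmail -q -v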
It seems like my server has a corrupted TCP/IP stack. Can it be resolved without actually reinstalling the OS?
Also, what are the possible reasons that could have caused the corrupted TCP/IP stack? It just happened suddenly, out of nowhere =(
With a corrupted TCP/IP stack, there's no way I can access the server remotely, right? So the only way to fix it is to either get the techs in the datacenter to do it, or go down personally and do it myself?
For the last 5 days, Exim has been retrying to resend an email to a recipient every millisecond.
As a result, the logs are huge and the load is being affected.
So I'd like to know how I can configure Exim to ignore sending to any email address I tell it to.
I mean, is there any config file I can look into to set an ignore list, or to have it retry sending every hour instead of every millisecond?
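A sketch of the two pieces I think I'm after, with a placeholder address:

# remove everything currently queued for that recipient
# (exiqgrep -i -r prints matching message IDs, exim -Mrm deletes them)
exiqgrep -i -r bad.user@example.com | xargs exim -Mrm

# in the retry section of exim.conf, a rule like this retries hourly for 4 days
# instead of hammering immediately (address pattern, error type, retry rule):
# example.com    *    F,4d,1h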
If the disk of my dedicated server is corrupt and i ask for a new disk+reload of OS, this reload should be free or i have to pay for it? (the server is from layeredtech)
Just a warning to all other hosts out there using cPanel: it currently generates corrupted MySQL backups due to a bug in their pkgacct script, and it has been like this for over 6 days now.
[url]
So for anyone out there it's an easy fix.
Of course, not every host has applied it, so helping a customer migrate data and having to explain to them that all the backups their host has been making are corrupted is lots of fun.
It's sort of sad, but I've been told this isn't a critical enough issue to even push out to all versions. Right now it makes a great lock-in, since customers cannot switch providers without a lot more work.
The 150 GB Raptor drive in my Q9300 + 8 GB RAM server got corrupted. When restarting, the tech got a halt error:
Windows failed to load because the kernel debugger DLL is missing, or corrupt.
Status: 0xc00000e9
File: \Windows\system32\kdcom.dll
Their tech support is good, but I've lost all my valuable data. I can't say that I'm happy about using Limestonenetworks now. I never suspected that I'd be having a disk failure in the first month.
So the problem I have is that the /home partition of the server is heavily corrupted.
The NOC has already run fsck on the disk, but not all the errors were fixed, so they are running another fsck.
Here's what the manager suggested:
Quote:
Hence, to save time, we would suggest an OS reload on the server. We may do the OS reload on a fresh hard disk and then attach the current disk as secondary. After that we may use a data recovery tool such as ddrescue to recover data from the corrupted hard drive.
Since I am a bit nervous now (no backups, as the cron jobs haven't been able to run for some time due to the corrupted HDD), I'm wondering: how often does it fail to recover all the data? I know it's hard to say, but just approximately?
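For my own notes, this is the kind of ddrescue run I expect them to do once the old disk is attached as secondary (device names and mount point are placeholders):

# first pass: copy the corrupted partition to an image, skipping the slow scraping phase
ddrescue -n /dev/sdb1 /mnt/recovery/home.img /mnt/recovery/home.map

# second pass: go back and retry the bad areas a few times, using the same map file
ddrescue -r3 /dev/sdb1 /mnt/recovery/home.img /mnt/recovery/home.map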