Writing Large Files - Risk Of Damaging Filesystem
Jan 19, 2008

Does writing large files (e.g., 10GB backups in one archive) pose any risk of damaging a Linux filesystem?
We're considering deploying a large server that will have 8x 500GB drives in a RAID-10 config. I intend to use a 3ware 9650SE w/ BBU along with A/B power to each of the PSUs.
My question is... since this will result in a 2TB array/partition, in the event of a crash (kernel panic, etc. -- I expect a power outage will be very, very rare if at all), what do you guys think the fsck time would be? In my experience a RAID BBU significantly drops it, sometimes to the point where no manual fsck is required, but in the event of a manual fsck, shouldn't the BBU provide more consistent data (fewer errors) and therefore a much shorter fsck? Maybe just recovering the journal?
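For what it's worth, a journaled filesystem such as ext3 normally just replays its journal after a crash rather than doing a full check. A sketch of how you might verify and tune that behaviour, assuming the array shows up as /dev/sda1 (the device and options here are examples):

Code:
# show journal and periodic-check settings (ext3 assumed)
tune2fs -l /dev/sda1 | grep -Ei 'journal|mount count|check'
# disable the mount-count/interval-based full fsck so only real errors trigger one
tune2fs -c 0 -i 0 /dev/sda1
# dry-run check that changes nothing (run against an unmounted partition)
fsck -n /dev/sda1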
Linux Fedora 6, Apache 2 with Mod Security, MySQL.
Our mod_sec logs get incredibly large very quickly. In the configuration for mod_security, we have specified logging options as
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus "^[45]"
but mod_sec.log grows to almost 10 GB (in a matter of 5-6 days) before it is rotated to mod_sec.log.1 and a new one is created.
Is there a way we can specify that a max size of one log file is 1 GB, for example?
Or another question: how come it gets so huge so quickly? We thought that logging "RelevantOnly" would record only errors/requests that are deemed security risks.
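If mod_security itself won't cap the file, handing it to logrotate with a size trigger is a common workaround; a minimal sketch, assuming the log lives at /var/log/httpd/mod_sec.log (the path and restart command will vary with your setup):

Code:
# /etc/logrotate.d/mod_security (hypothetical file)
/var/log/httpd/mod_sec.log {
    size 1024M
    rotate 5
    compress
    missingok
    notifempty
    postrotate
        /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
    endscript
}

Note that logrotate normally runs from a daily cron job, so the size test is only evaluated once a day.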
I have a customer who wants to sell access to videos of conferences he runs.
Each FLV video is approx. 1-1.5 hours long and approx. 380MB; there will be about 12 videos per conference,
and approx. 4-8 conferences per year.
My customer suggests 10 - 20 people will buy access to watch each video.
Access to watch the videos will be through a password protected webpage.
The issue: the current site's hosting company only allows uploads of up to 150MB per file.
Can I host the Flash videos elsewhere and deliver them through the password-protected web page, without anyone else being able to see them on the server where they are hosted?
This would also reduce the bandwidth going through his current site server.
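One pattern that may fit: park the FLVs on a second host and refuse direct requests there, so only the protected page can embed them. A minimal .htaccess sketch for the remote host (Apache with mod_rewrite assumed; the domain is a placeholder):

Code:
RewriteEngine On
# refuse .flv requests that don't come via the protected site
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?customersite\.example\.com/ [NC]
RewriteRule \.flv$ - [F]

Referer checks are easy to spoof, though; a token-checking script that streams the file after verifying the login, or signed expiring URLs, would be stronger.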
I am trying to locate what large files are filling up / on the server, but I am having trouble using the find command to do this.
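For reference, a one-liner that usually tracks this down (the 500MB threshold is just an example):

Code:
# list files over 500MB, staying on the root filesystem only
find / -xdev -type f -size +500M -exec ls -lh {} \;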
I'm working on a web site which will basically be a Flash games portal. I have a dedicated server running Apache 2 on a 100mbit dedicated line, but my download speed for large files (Flash files of over 5MB) is really slow. I am thinking this is because of Apache, but I don't know much about this. I've read that I should change to a lighter HTTP server for serving static files. The way my server is set up, I have 2 virtual machines running, one doing the PHP processing and the other serving static files, both running Apache, so if I have to change HTTP server for the static files it would be very easy. Although I am not sure if this is necessary, or if I can tune Apache to push files faster than this.
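Before switching servers, it may be worth confirming the static-file Apache is using sendfile and keepalives; a minimal httpd.conf sketch (the directives are standard Apache 2.x, the values are just starting points):

Code:
# let the kernel stream static files directly
EnableSendfile On
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5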
I'm facing a very strange FTP issue with one of my shared-hosting accounts; all of my other servers have no problems, only this one. When I try to upload a file (any file) larger than 500kb from my local PCs, in most cases the upload stalls partway through and hangs until it times out.
There are 2 interesting things though:
The file transmission typically hangs when approximately 248kb of the file have been transferred; please see the attached screenshot for an example.
To illustrate: if I pick a file at random and attempt to upload it to my host 10 times, 5 times it will hang once 248kb of the total size have been transferred, 3 times it will hang at other points *near* 248kb (typically 224kb or 280kb), 1 time it will hang at some other random point, and 1 time it might upload successfully (yes, there is still a tiny chance of a successful upload).
My normal internet upload speed is 80-100kb/s. Lately I found that when I limit the upload speed in my FTP client (e.g., max. 30kb/s), everything WILL WORK without any problem! No hangs, no interruptions. Whereas when I remove the limit and let it upload at my regular speed, the problem appears again.
It seems to me that the FTP hangs only when the uploading speed is higher than 60kb/s. However my host provider told me that they have customers uploading without any problem at over 400kb/s, and they said "there's no problem or limitations on the server at all".
Up until now, I have done following things to troubleshoot the issue but with no luck:
Contacted my host.
Disabled/Enabled the PASV mode on my FTP client.
Tried different FTP clients on different computers (FlashFXP and Filezilla).
Rebooted my router and reset everything to the factory default settings.
Contacted my ISP about the issue; they "did something" but nothing helped.
Rebooted all my PCs.
Disabled both firewalls on my PC and on the router.
Furthermore, I asked a friend of mine in another city, on another ISP, to test the FTP uploading, but unfortunately he got exactly the same problem. And I've searched the internet for hours, but no one seems to have had the same problem.
I just logged into my VPS and was astonished by how much space I have in use.
8.09GB... but I can't figure out what's using up so much space!?
How can I find out where the large files are located, since usage is increasing daily?
I use LXAdmin with HyperVM Control Panel
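A quick way to narrow it down is to walk the tree one level at a time, descending into whichever directory is biggest; a sketch:

Code:
# sizes in KB of each top-level directory, biggest first
du -kx --max-depth=1 / | sort -rn | head -20
# then drill into the largest one, e.g.:
du -kx --max-depth=1 /var | sort -rn | head -20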
Just noticed quite a few large core.* files within one of our websites (in a subfolder of public_html). Anyone know what these are and how they got there?
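Those are usually core dumps written when a process (often a CGI or PHP binary) crashed in that directory. A sketch for identifying and suppressing them (the filename here is an example):

Code:
# see which program dumped the core
file public_html/subfolder/core.12345
# stop new dumps in the current shell
ulimit -c 0
# or system-wide via /etc/security/limits.conf:
# *  soft  core  0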
I have a Debian box, and have archived a gallery into a .tar file, 5.77GB.
I have a centOS box, and have used wget to bring the data file over to the new server.
However, upon doing so, it only detects the file as 1.8GB when it starts downloading.
I have terminal access to both servers, just trying to bring my files over from one server to another.
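Older wget builds and some FTP servers mishandle files over 2GB (32-bit size counters), which would explain the 1.8GB figure. Since you have shell on both boxes, copying over SSH sidesteps that entirely; a sketch (host and paths are placeholders):

Code:
# run on the CentOS box: resumable copy with progress
rsync -avP user@debianbox:/path/to/gallery.tar /local/path/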
I've been using Lypha for the past 4 years, but this is the last straw (gigabytes of backups went missing and they won't reply to emails asking why).
I'm looking for a web hosting package for under $10/month with enough disk space/bandwidth to let me back up large audio/video files to it, on top of normal site operation (I use it for a portfolio website, as well as hosting additional domains).
I am developing a web application for a private investigative firm. They do surveillance work and therefore have surveillance videos. I would like the capabilities of uploading the videos online and allowing the client to login and view their surveillance video online.
Currently, we get the video from the PI, put it on a DVD and then mail it to the client.
This takes too long. We want the client to be able to view the video online.
Some of these videos can be up to 2 hours long.
First, is this even possible?
Second,
- how much bandwidth would a website like this take?
- Is there a host that can hold hundreds of GB of video?
I want to convert it to flash to save file size and also so I can stream it.
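For rough sizing: a 2-hour video encoded at 500kbit/s works out to about 450MB (500,000 bit/s x 7,200 s / 8), so every hundred full views of one video is on the order of 45GB of transfer. Those are assumed numbers; the bitrate you pick when converting to Flash drives everything.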
I have some hundreds of MBs to move, and I'm definitely not doing it by transferring via my PC / FTP.
I've seen all the tutorials on how to move your MySQL databases, but what about full folders etc.? How do I move those (PuTTY?)?
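A sketch of the two usual approaches from an SSH session (hostnames and paths are placeholders):

Code:
# run on the destination server: mirror a directory tree over SSH
rsync -avz user@oldserver:/home/user/public_html/ /home/user/public_html/
# or stream a tarball across without creating a temp file on either side
ssh user@oldserver 'tar czf - -C /home/user public_html' | tar xzf - -C /home/user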
I have four servers, each with a quad Xeon, 4GB RAM, and 2x300GB SAS 15K RAID0 hard drives, pushing a total of 1.6gbit. They serve a lot of zip files with an average filesize of 180MB. My question is, how can I optimize lighttpd 1.4.19 to push its max with very low IO-wait? I've looked up some stuff and only found options that deal with lighttpd 1.5 and use Linux-AIO for the backend network. Currently I use writev with 16 workers and a read/write idle timeout of 10s. Logging is off, too.
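Under 1.4 the usual first step is switching the network backend from writev to sendfile so the kernel streams the files directly; a hedged lighttpd.conf sketch (the option names are real 1.4 options, the values are guesses for this workload):

Code:
# stream files via sendfile instead of writev buffers
server.network-backend     = "linux-sendfile"
server.event-handler       = "linux-sysepoll"
# fewer workers can mean less disk seek contention with big zips
server.max-worker          = 8
server.max-keep-alive-idle = 10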
Something weird happening here. I have tried every string possible...
There are a number of folders I want to remove off my server, tried the good old and simple...
rm -r /folder/
And then ended up with a prompt filling my screen. No matter what I do, as it goes recursively into the directory it asks me whether I want to remove each file individually. No matter what command or option I use, it insists on asking as it deletes each file.
Could this be a configuration option in CentOS?
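It almost certainly is: CentOS ships root's shell with rm aliased to rm -i, which forces the per-file prompt. A sketch:

Code:
# confirm the alias
alias rm
# bypass it for one command
\rm -r /folder/
# or force, skipping all prompts (be careful)
rm -rf /folder/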
I just want to know: is it safe to do a remote daily backup of about 70,000 files?
File sizes are about 200kb, and every day I have about 1000 new files. rsync should first check the old files, because I delete about 30-50 of them daily, and then back up the 1000 new files.
So how long will it take each time to compare those 70,000 files?
I have 2 options now:
1. Using a second HDD and RAID 1.
2. Using rsync to back up to my second server (see the sketch below), which saves me about $70 each month.
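A minimal sketch of option 2 (the paths and hostname are examples): rsync only walks the file list and transfers what changed, so after the first full copy each nightly run moves roughly the 1000 new files plus metadata checks for the rest.

Code:
# -a preserves attributes, --delete drops the 30-50 files removed each day
rsync -a --delete /var/myfiles/ user@backupserver:/backups/myfiles/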
I've got a client who wants to host audio files... Here are the sizes:
50 x 75MBs
300 x 10MBs
400 x 5MBs
That totals 8750MB, or 8.75GB... If he gets hundreds of visitors, it could end up being thousands of GBs of bandwidth.
I don't know what to look for to support so much bandwidth... Do you buy bandwidth? Are there special companies out there that host this for you?
The domain has PHP Settings in Plesk set to 2G, and I get this error when uploading a 48MB file using WordPress. I assume I need to modify this manually in a conf file somewhere to allow uploading large files?
Requested content-length of 48443338 is larger than the configured limit of 10240000..
mod_fcgid: error reading data, FastCGI server closed connection...
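That first error is mod_fcgid's request-size cap, not a PHP limit, which is why the Plesk PHP settings don't help. A sketch of the fix (where the file lives varies by OS and Plesk version):

Code:
# e.g. in the Apache fcgid config; value is in bytes (~100MB here)
FcgidMaxRequestLen 104857600

Older mod_fcgid versions spell the directive MaxRequestLen. Restart Apache afterwards.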
I have a 6GB backup file created with another Plesk Backup Manager. Now I am trying to upload this backup file to my Plesk Backup Manager, but after uploading 3% I get a "413 Request Entity Too Large" error. I tried disabling NGINX but still get this error.
How can I resolve this error, or is there any other way to upload my file to Backup Manager?
I see that Backup Manager has a file size restriction of 2GB; how can I increase this?
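A 413 that early in the upload usually means the nginx proxy in front of Apache is rejecting the request body; raising client_max_body_size is the usual fix. A sketch (whether this belongs in a Plesk panel template or a vhost include varies by version, so treat the location as an assumption):

Code:
# allow request bodies up to 8GB for this vhost
client_max_body_size 8192m;

Reload nginx after the change.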
I wanted to set up a cronjob with rsync, but I was wondering what happens when a user is uploading a file, so the file is still being created at the moment rsync synchronizes with another server.
Will the file be skipped, partially synced, ...?
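For what it's worth, rsync builds the destination copy in a temporary file and renames it into place, so the target never holds a half-written file; a file that is still growing when rsync scans it is typically copied in whatever state it was in and then completed on the next run. A sketch of a flag that helps with interrupted transfers:

Code:
# --partial keeps interrupted transfers so a later run can finish them
rsync -a --partial /uploads/ user@backupserver:/uploads/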
In reference to my previous post, I want to transfer across 7GB of data, approximately 80,000 files I believe (due to a gallery script).
It's currently on another host (a webhosting account) whose control panel has no options beyond managing databases, so the only way I can see to do this is via FTP, but that would take me days. I've tried compression and backup scripts, but the execution time limit on the host's server is too low to let the files be zipped. Are there any other ways? Can I log in to my VPS via SSH and somehow pull the files off the other host's server?
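Yes: from the VPS's SSH shell you can pull everything server-to-server over FTP, which skips the round trip through your PC. A sketch (credentials and host are placeholders):

Code:
# mirror the account recursively via FTP from the VPS
wget -r -nH ftp://FTPUSER:FTPPASS@oldhost.example.com/public_html/
# lftp's mirror mode copes better with ~80,000 files, if it's installed
lftp -u FTPUSER,FTPPASS -e 'mirror public_html /home/you/public_html; quit' oldhost.example.com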
My account has been hacked, with every index.php page defaced. I've cleaned up, and my shared webhost is pointing at me, saying there shouldn't be 777 permissions on any files in there.
I used 777 to allow PHP to add records to a txt file and an xml file.
Is there a better / more secure chmod code I can use?
Those are the only two instances where I need PHP to write to a file, and those files shouldn't be served to anyone; I do not want anyone to be able to access them.
How can I secure them while letting PHP write to them?
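A tighter pattern is to give the files to the user PHP actually runs as and cut off web access entirely; a sketch assuming PHP runs as the apache user and the files sit in a writable/ subfolder (under suEXEC/phpsuexec, chown to your own account user instead):

Code:
# writable by the PHP user only, no group/world access
chown apache:apache writable/records.txt writable/records.xml
chmod 600 writable/records.txt writable/records.xml
# writable/.htaccess (Apache 2.2 syntax) to block direct requests:
# Order allow,deny
# Deny from all

Moving the files above public_html altogether is safer still.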
Sending mail from my server doesn't work!
The exim log gives:
Code:
2007-08-24 16:52:18 1IOZa6-0007Ct-8y ** xserverx@hotmail.com R=lookuphost T=remote_smtp: SMTP error from remote mail server after MAIL FROM:<root@server.XXXX..info> SIZE=1419: host mx3.hotmail.com [65.54.244.200]: 501 Invalid Address
2007-08-24 16:52:18 1IOZaE-0007DC-Do Error while reading message with no usable sender address (R=1IOZa6-0007Ct-8y): at least one malformed recipient address: root@server.XXXX..info - domain missing or malformed
2007-08-24 16:52:18 1IOZa6-0007Ct-8y Process failed (1) when writing error message to root@server.XXXX..info (frozen)
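The double dot in root@server.XXXX..info is the real clue: exim is building its sender address from a malformed hostname, and remote servers reject it. A sketch of what to check (names are placeholders):

Code:
# what the box thinks its FQDN is
hostname -f
# set a valid FQDN and make sure /etc/hosts matches, e.g.:
# 1.2.3.4  server.example.com  server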
I'm trying to change url structure so instead of /default/category/product.html it would show /category/product.html
With this line I've managed to do it on my personal blog
RedirectMatch 301 /default/(.*) //$1
But when I implemented it on a customer's Magento site, it started showing double slashes, like //category/product.html, and the whole template just collapsed.
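The double slash comes from the //$1 replacement: the leading / is literal, so the captured path gets a second one. Anchoring the pattern and using a single slash should behave the same on both sites:

Code:
RedirectMatch 301 ^/default/(.*)$ /$1

Note that Magento ships its own rewrite rules in .htaccess, so it's worth testing that this doesn't collide with them.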
my domain name expires in July (within 90 days).
It is currently with company A who charge quite a lot to keep it there. I want to move it to company B who are my hosts and with whom I get 1 free domain name.
One added complication is that the domain is in a friend's name, but I have the login and can change the name to my own any time I want.
Company B said "After it is on our registrar, you will be the only one that can renew it as long as it doesn't expire for longer than 90 days."
This has me worried that because I haven't renewed it within 90 days, it could be stolen from me. Have I misunderstood, or is this a risk?
If so, would I be better advised to renew it in my friend's name with company A?
I currently have a VPS. I have installed cPanel/WHM + CSF Firewall.
Everything is fine and all the ports are closed except for the ones I need.
I currently have some issues I need to fix, but google isn't helping
Quote:
Check /tmp is mounted as a filesystem
WARNING: /tmp should be mounted as a separate filesystem with the noexec,nosuid options set
I tried googling this and there was a cPanel script, but I do not have permission to run it. So does anyone mind explaining it to me one step at a time?
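Done by hand, this amounts to giving /tmp its own filesystem mounted with those flags; cPanel's /scripts/securetmp automates exactly the following. A sketch using a file-backed loop device when there is no spare partition (the size and path are examples):

Code:
# create a 2GB file-backed /tmp (run as root while /tmp is quiet)
dd if=/dev/zero of=/usr/tmpDSK bs=1M count=2048
mkfs.ext3 -F /usr/tmpDSK
mount -o loop,noexec,nosuid /usr/tmpDSK /tmp
chmod 1777 /tmp
# persist it in /etc/fstab:
# /usr/tmpDSK  /tmp  ext3  loop,noexec,nosuid  0 0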
Quote:
You should consider adding ini_set to the disable_functions in the PHP configuration as this setting allows PHP scripts to override global security and performance settings for PHP scripts. Adding ini_set can break PHP scripts and commenting out any use of ini_set in such scripts is advised
I have disabled this in php.ini, but I do not know why it still says that I have to fix this.
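One common cause: the edit landed in a php.ini that Apache doesn't load, or Apache was never restarted, so the scanner still sees the old value. A sketch for checking (note that php -i reports the CLI's ini; for mod_php, confirm with a phpinfo() page instead):

Code:
# which php.ini the CLI is loading
php -i | grep 'Loaded Configuration File'
# after editing the right php.ini:
# disable_functions = ini_set
service httpd restart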
I have a server with pacificrack. When I see my website it says "Read-Only File system" and my SSH dies with read-only file system errors.
It happens once per month, and the bad thing is I'm sleeping when it happens so I don't know my server is in this state.
Even though support fixes the problem within 15 minutes of my raising a support ticket, my server is down for hours because I don't know it's down and so can't raise the ticket. I asked support why it happens and they never tell me... they just tell me to read the logs, but I'm not technical enough to understand Linux logs etc.
Server is centos.
Somehow the filesystem on my server turned read-only. The databases would not connect. When I rebooted the box it would not come up.
Support had to run fsck. Even after that, the databases had to be repaired using the cPanel tool. Things seem to be OK now, however:
1. What could have caused this to happen? Software or hardware failure?
2. I get these notifications, should I be worried?
WARNING: Kernel Errors Present
EXT3-fs error (device sda2): e...: 1 Time(s)
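ext3 remounts itself read-only when it hits an error (with the common errors=remount-ro setting), and repeated EXT3-fs errors on one device often point at failing hardware underneath. A sketch of what to check (smartmontools assumed installed; hardware RAID controllers may need extra -d flags):

Code:
# look for I/O errors behind the EXT3 messages
dmesg | grep -i 'sda\|i/o error'
# SMART health of the disk
smartctl -a /dev/sda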
I'm a Windows guy and know little or nothing about Linux. How big a risk do I take if I'm using a Linux VPS and never update/patch the kernel?
I'm using CentOS 5 and LxAdmin. I can update the control panel, but I cannot update/patch the kernel since I don't know how to do that.
I'm using a unmanaged plan, so no help there.
Some of my sites are running WordPress, but I always run the latest WP installation. I'm not using any plugins other than WG2, Gallery2, and Remove Max Width.
Nobody except me have access to the VPS, and I have no other FTP accounts or something like that on the VPS.
I have no other scripts or any kind of dynamic pages on my VPS.
What kind of risk do I have here?
I'm currently planning to cancel my second VPS, which runs Win2003, and only use Linux in the future. I can cut my monthly expenses by 50% that way, but do I take a big risk doing it that way?
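For what it's worth, patching a CentOS kernel is normally a two-command job; the main caveat is that on many VPS platforms (OpenVZ/Virtuozzo, which HyperVM often fronts) the guest runs the host's kernel and there is nothing for you to patch, so it's worth asking the provider which applies. A sketch:

Code:
yum update kernel
reboot
# after the reboot, confirm the running version
uname -r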