Can Anyone Share A 100% Pure Cogent 100MB Test File For Me
Jul 23, 2008
Can anyone share a 100MB download link that I can use to test Cogent's speed to my network? Ideally it would be hosted on a server plugged into a 100Mbps port at the switch, so I can see whether the transfer maxes out or not.
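If it helps, the way I plan to test on the Linux side is just to pull the file straight to /dev/null and read the average rate wget reports (the URL here is only a placeholder for whatever link gets posted):

Code:
# discard the data, just measure the transfer rate (URL is a placeholder)
wget -O /dev/null http://some-cogent-host.example.com/100mb.test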
Is there a way to disable the temporary filename? For instance, when I upload a file via FTP, the filename becomes .pureftp-24213421423, and it only returns to the original filename when you abort the transfer or when it is completed.
I've been running pure-ftpd for around 4 months now without any problems, but in the last 24-48 hours file uploads have been going a bit loopy.
When you upload a file, the speed bounces around considerably and at times sits at 0 kbps until the connection dies and the upload fails. Nine out of ten uploads I have tried have failed.
[R] Opening data connection IP: 74.86.20.181 PORT: 35283
[R] LIST -al
[R] 150 Accepted data connection
[R] 226-Options: -a -l
[R] 226 6 matches total
[R] List Complete: 374 bytes in 0.64 seconds (0.6 KB/s)
Transfer queue completed
1 File failed to transfer
[R] Connection lost: chacha

We have restarted pure-ftpd a number of times, but have had no luck.
Could you please try to upload a file (at least 10MB, and please nothing dodgy) to this FTP account:
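If you'd rather test from a shell than a GUI client, curl can do the FTP upload directly; the filename, host and credentials below are just placeholders for the account details:

Code:
# create a 10MB dummy file and upload it over FTP (host/credentials are placeholders)
dd if=/dev/zero of=test10mb.bin bs=1M count=10
curl -T test10mb.bin ftp://username:password@ftp.example.com/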
I'm sure you all may have heard this question before, so I'm sorry if I'm beating a dead horse...just can't seem to find a good answer. I am interested in setting up a fileserver / fileshare on a VPS so that I can create a mapped drive on a windows PC which points to the fileshare on the VPS. I have a client who currently uses a physical server to perform this task, however this physical server is under-utilized and somewhat unnecessary. I mentioned the possibility of moving to a VPS and he seemed interested. I decided to purchase an entry-level account from VPSLAND to use for testing purposes prior to moving forward with the project. I can't seem to get anything to work so I'm looking for ideas.
I purchased a VPSLAND Windows-based EZ-series VPS with Plesk and all the other bundled goodies.
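For what it's worth, the end state I'm picturing on the client side is nothing fancier than mapping the VPS share to a drive letter, assuming the VPS ends up exposing an SMB share that is reachable from the PC (the server name, share name and account below are made up):

Code:
REM map the VPS share to Z: and prompt for the password (names are placeholders)
net use Z: \\vps.example.com\clientshare * /user:fileuser /persistent:yes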
I'm trying to find a low-cost solution for real-time file share replication in a Windows environment.
It doesn't look like there are any open-source Windows cluster filesystems around, so the only viable option I found would be running OpenFiler in a replication cluster on Hyper-V nodes. Has anyone worked with this setup? Does it work reliably?
The required I/O throughput on these shares would be minimal; my biggest concern is 100% availability.
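The crudest fallback I can think of, in case nothing better turns up, is a robocopy mirror that re-runs whenever it sees changes; it is only near-real-time and is not a cluster filesystem, and the paths below are just placeholders:

Code:
REM mirror the share to the second node and re-run when changes are detected (paths are placeholders)
robocopy C:\Shares\Data \\node2\Data$ /MIR /MON:1 /MOT:1 /R:2 /W:5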
There are always people who would like to know what the PHP settings are on a server. Is it a security risk to share a phpinfo.php file on a website, where anybody who visits that website is able to view it?
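To be clear, I know access to it can be restricted, e.g. with something like the following in an Apache 2.2-style config (the IP is just an example standing in for your own address); my question is specifically about leaving it readable by everyone:

Code:
<Files phpinfo.php>
    Order deny,allow
    Deny from all
    Allow from 203.0.113.10
</Files>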
I recently started building out a new network rack to provide a production web site. The new equipment stack includes a disk array providing a CIFS file share to store images to be served up by Apache.
I have had zero luck in getting Apache to properly access the imagestore from the network share. I've read more Google pages on this subject today than I can count, but I am still not having any success getting this working right.
I'll do my best to explain the configuration.
I have an ESXi host running several virtual machines. Each machine needs to be able to access the shares. Each host has multiple network interfaces, each connected to a separate subnet. The virtual machines are running Windows Server 2012 Datacenter edition.
The disk array is file-mode access, with NFS and CIFS shares. It has interfaces on both subnets that each VM can reach. I have established a standalone CIFS server with the shares configured. They are accessible from the VMs.
I have mapped the share to a drive letter on the VM client, and it works properly from the logged in account. I have full control over files on the file system (create, modify, delete).
The VM has Apache 2.4.9 installed.
Things I've tried with no success:
- created a symlink to the CIFS-mounted drive in the webroot directory
- added an alias to the CIFS-mounted drive
- added the aliased directory using the <Directory> directive
- added the alias and directory directives using UNC references
I am mostly seeing errors like "path is invalid", but when I try to add the mapped drive (F:) or the UNC-referenced directory, the Apache service won't start.
I added a separate user for the Apache service and added it to the group that has privileges to access the share, but that still didn't work.
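For reference, this is roughly the shape of the UNC-based configuration I have been attempting, with the Apache service set to run under that dedicated account rather than LocalSystem (the server, share and alias names here are made up):

Code:
# httpd.conf fragment - UNC path with forward slashes instead of a mapped drive letter
Alias /images "//fileserver/imagestore"
<Directory "//fileserver/imagestore">
    Require all granted
</Directory>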
We have found that we need to limit the amount of CPU usage by users on our video share server. We currently have 20 users on a shared plan on this server. We thought the obvious bandwidth usage would be the biggest challenge; as it turns out, we haven't gone over the 2 TB that we have.
We have come up with an encoding process that uses the H.264 codec and gives us excellent results in terms of quality, but it is very CPU-intensive, to the point of really slowing down the server when 10 or more users are encoding their videos simultaneously.
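One stopgap we are considering, assuming we keep kicking off the encodes from our own script and the encoder is something ffmpeg/x264-like (the command and filenames below are only placeholders), is simply running them at the lowest CPU and I/O priority:

Code:
# run the encode at minimum CPU and I/O priority (command and filenames are placeholders)
nice -n 19 ionice -c3 ffmpeg -i input.mov -c:v libx264 -preset medium output.mp4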
Can someone suggest a script that would allow us to limit the amount of data, in MB/GB, that each user can upload per month?
So, for example, a client paying $10.00 per month would be limited to a total of 900 MB of uploads per month, versus a client paying $50.00 per month who would be able to upload, say, 8 GB per month.
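To show the kind of thing we're after, here is a minimal sketch of a per-user check, assuming each user uploads into their own directory and that the directory only holds the current month's uploads (paths and limits are made up; a real version would also need to handle month rollover):

Code:
#!/bin/sh
# usage: check_quota.sh <username> <limit_in_MB>  (paths are placeholders)
USER_DIR="/home/uploads/$1"
LIMIT_MB="$2"
USED_MB=$(du -sm "$USER_DIR" | awk '{print $1}')
if [ "$USED_MB" -ge "$LIMIT_MB" ]; then
    echo "quota exceeded for $1 ($USED_MB MB of $LIMIT_MB MB)"
    exit 1
fi
exit 0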
A few days ago, a friend of mine studying in America recommended a popular new transfer tool called Qoodaa. He told me it was quite a good piece of software for downloading files and movies. At first I was skeptical, but after using it I found Qoodaa was a good choice. I have summarized some of its features:
1. Its speed for uploading movies is faster than any other software I have used before.
2. It can download files quickly through download links; in a word, it is a time-saver and highly efficient.
3. No space limit. No matter where you are, it downloads fast.
4. Qoodaa is clean, lightweight software with high security, and it is easy to use.
It really can give you an unexpected surprise.
I am someone who likes to share with others, so if you have something good, please share it with me.
I am having a strange DNS issue on a Cogent circuit using Cogent DNS servers at 66.28.0.45 and 66.28.0.61. What is happening is that some domain requests will timeout the first try. Then subsequent tries will be quick with no timeouts.
I am having a very hard time getting through to Cogent that there might be an issue somewhere, and I was wondering if anyone on a Cogent line using the same Cogent DNS servers could run the same test and see if you can reproduce any timeouts.
How I am testing:
- Open nslookup (in Linux use: nslookup -timeout=2; Windows defaults to 2 seconds)
- Pick a random domain name (favorite cereal.com, movie title.com, brand name.com, random word.com)
- If a timeout occurs, repeat the test for the same domain to see the next query resolve instantly
Here is an example of what is happening for me:
Code:
[eger@womp ~]# nslookup -timeout=2
> superman.com
;; connection timed out; no servers could be reached
> superman.com
Server:    66.28.0.45
Address:   66.28.0.45#53

Non-authoritative answer:
Name:    superman.com
Address: 64.12.47.7
> napaautoparts.com
;; connection timed out; no servers could be reached
> napaautoparts.com
Server:    66.28.0.45
Address:   66.28.0.45#53
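If nslookup is awkward to script, the same check can be done with dig against one of the Cogent resolvers; the domain list is just whatever random names you want to try:

Code:
# one query per domain, 2 second timeout, no retries
for d in superman.com napaautoparts.com example.com; do
    dig @66.28.0.45 "$d" +time=2 +tries=1 +short
done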
The subject pretty much sums it up: is there a method or solution for multiple websites (which reside on the same dedicated server) to share just one .htpasswd file, or to automate the mirroring of that .htpasswd file?
If so, any suggestions for a methodology or products that would facilitate this would be most welcome. Thanks in advance, friends!
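To clarify what I'm imagining: since AuthUserFile takes an absolute path, each site's vhost or .htaccess could in principle just point at the same file kept outside the docroots, something like this (the path and realm name are only examples):

Code:
# in each site's vhost or .htaccess
AuthType Basic
AuthName "Members"
AuthUserFile /etc/apache2/shared/.htpasswd
Require valid-user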
When companies say your uplink or port speed is 100Mb/s, do they mean you're sharing it with hundreds of other servers, or that you have 100Mb/s all to yourself? If it's shared, then why even advertise 100Mb/s, since you will never truly achieve it?
I have rented a server from Limestone with a 100Mb port, but when I attempt to download a file the transfer rate does not exceed 10MB, even though I can see the port is at 80% of its usage.
The server is new and only has Windows installed.
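In case my own math is the answer here: 100Mb/s divided by 8 is roughly 12.5 MB/s as the theoretical maximum, so if that 10MB figure is MB per second, with the port sitting at 80% utilization, is that simply as much as a 100Mb port can deliver?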
I did not put this under the tutorial section because it is not comprehensive enough. It's just a simple rant.
Those of you shopping for a host come to this forum, and are often given the advice to ask for a "test file" to download.
1) Hosts who offer test files will most likely put the file on their fastest server, not the server where your site will actually be hosted.
2) If they have servers in multiple data centers, they will use the one that is the most well-connected, not necessarily the one where your site is going to be.
3) Even if your account is assigned to the same server as the test file, what does your ability to download a static file actually prove? A test file does not show how well a server is going to perform, which is usually the biggest factor in page load times.
4) Even barring all of the above, and assuming your site will consist only of static files, most of your visitors will not be in the same location as you, so your results may differ from theirs.
A test file can be helpful in rare circumstances, but as a potential customer you would have no way to really know whether your download is a true test of what you can expect, so it is best not to rely on something like that unless you are downloading it from your own hosting account with that host.
The only way to really know how a host is going to perform is to try it out. This is why hosts offer full money-back guarantees and free trials.
We are nearing the end of our contract with Cogent and are deciding whether to continue with them. Bandcon has recently (within the last year or so) established a presence in the NYC metro area.
Who would you choose among the two? Please give your input and evaluation of the two networks.
So apparently my sales rep is telling me Cogent will not give me a second circuit for a redundant line; in other words, no VRRP or HSRP, nothing, unless I purchase another 200Mb contract with them.
Anyone else ever have this problem?
They aren't even willing to just charge a port fee for the second GBIC I'd take up.
I have been using Cogent for many years and have always been pretty pleased with the service and bandwidth (I know many consider it bottom-rate/budget bandwidth). I would usually be able to call in and speak with someone who could check routing, log in to switches to verify port settings, and make reverse DNS changes right then and there.
Within the last 6 months, though, I have been getting pretty poor support from them. It seems they are hiring more and more people just to be able to answer phones. The techs seem to have a hard time comprehending even simple reverse DNS requests and always ask me to hold for extended periods of time.
Today I called in and was asked to hold right as they picked up the phone! I mean, if you're just going to pick up to ask me to hold, why pick up in the first place until you are ready?