Revision as of 08:18, 4 June 2011 by ApLundell (talk | contribs) (→User DMCA report system)
::{{xt|doesn't need your help}} -- Hm, that's what I was wondering about. Apparently, they do need help (which makes me wonder how serious the big copyright holders actually are about fighting infringement). --] (]) 22:30, 3 June 2011 (UTC)
:::The issues regarding copyright laws in the United States are quite complicated. You might check out ]'s '']'' for some great examples of that. It's a serious work written by a highly-respected law professor. It's available for free at his website. There are a lot of people who believe that the current state of US copyright law is an aberration brought about by excessive legislative pandering to a few major media companies. Whether you agree with it or not, you should be aware that a significant number of people online — including many who are very tech savvy — would see your site as a place for "snitching". --] (]) 23:01, 3 June 2011 (UTC)
:::: Oh sure. As a practical matter, you might as well put up a giant sign that says "All hacker groups welcome to attack this site!". ] (]) 08:18, 4 June 2011 (UTC)
::: Copyright violation is a civil matter between the copyright holder and the violator. I strongly question the motives of any 3rd party who wants to get involved.
::: People don't snitch because they're interested in Truth, Justice, and the American Way. They do it either because they want to ''get'' someone they are already angry at, because they want some reward or favoritism from the authority they're snitching to, or because they get a sick thrill from pretending to be an authority figure. In my opinion, any of those three makes you a bad person. ] (]) 08:18, 4 June 2011 (UTC)
:You could certainly set up the site as described — there's no bottleneck there. Whether people would want to participate, I have no idea. But I wouldn't expect it to make any difference in actual prosecution of copyright infringements. Again, the issue here is not that the companies cannot find instances of infringement. If you and I can do it within seconds of a Google search, so can they. If you can use a Torrent website, so can they. The issue isn't that the companies don't know that there are instances of infringement out there. --] (]) 21:52, 3 June 2011 (UTC)
Welcome to the computing section of the Misplaced Pages reference desk. Want a faster answer?
Main page: Help searching Misplaced Pages
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Misplaced Pages:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
May 30
Legally downloading System Shock 2
Having played newer games by Ken Levine, I'm interested in his older work. Are the 1994 game System Shock or its 1999 sequel System Shock 2 available for legal download (I'm totally not interested in Piratebay or anything of that nature)? I've looked on Good Old Games, Steam, and Impulse, but none have either. Used physical copies are available at online retailers, but at some very hefty prices. Incidentally I live in the UK, so services like GameTap aren't available to me. Are there other online distribution channels I should search? TinyLittleRobot (talk) 00:12, 30 May 2011 (UTC)
Digression about abandonware sites.
The following discussion has been closed. Please do not modify it.
- I've boxed the above sidetrack. The question was for a legal way to download the System Shock games. The System Shock games are not abandoned works. They were published by EA, who seems to still be doing quite well. APL (talk) 06:16, 30 May 2011 (UTC)
- "Abandonware" is software that is no longer sold, not software whose original publisher has gone out of business. And being abandoned works in your sense (publisher out of business) would not affect the legality of downloading them, as Mr.98 explained. But I agree that the discussion of abandonware is probably useless to the original poster. -- BenRG (talk) 17:45, 30 May 2011 (UTC)
- We'd better stop discussing it, then. :) I don't know, I thought it might qualify as ethical (and safe), which might be what the OP really had in mind. Oh, and I agree with Mr.98's last remarks, entirely. Card Zero (talk) 18:03, 30 May 2011 (UTC)
- I googled "system shock" purchase and found that it's listed as a used item through amazon.com for about US$35. Comet Tuttle (talk) 16:39, 31 May 2011 (UTC)
- He clearly asked for a download purchase option. I don't blame him, physical media is a pain in the neck to manage. i kan reed (talk) 18:44, 2 June 2011 (UTC)
- Actual data! This gamasutra article from today quotes a guy from "Good Old Games" as saying with regard to System Shock and Syndicate, "the rights to those games are scattered between a lot of people, so it's quite a huge legal puzzle." System Shock 2 was not mentioned, but it seems likely that its rights situation is going to be similar or identical, so to me it sounds like it's not going to happen soon. Comet Tuttle (talk) 21:18, 2 June 2011 (UTC)
Making Add-Ons for Firefox
Would it be possible to write my own add-ons for Firefox? For my own use, principally. What language should I use? Are there any guides available for doing this? Thanks 2.101.10.190 (talk) 10:19, 30 May 2011 (UTC)
- There are some guides and tools available on the developer section of the addons site AvrillirvA (talk) 10:44, 30 May 2011 (UTC)
Date and time format in Numbers
Is there a way to set the default date and time format in Numbers on my iPad? I have tried selecting a block of cells and formatting it to 5/29/11 and 7:00PM but each time I reopen the spreadsheet it goes back to May 29 2011 and 19:00:00. I love my iPad but this is pissing me off to the point that I want to throw it out the Windows, ha ha. I have looked through the 200 plus page Numbers guide but it is less than helpful. If this is an example of the ease of use and switching to Apple from MS, I'm not convinced. — Preceding unsigned comment added by 64.234.6.175 (talk) 14:30, 30 May 2011 (UTC)
Computer related post on Misc Desk
See Misplaced Pages:Reference_desk/Miscellaneous#Please_help_-_how_to_stop_pornographic_content_and_popups_on_a_computer. Exxolon (talk) 15:59, 30 May 2011 (UTC)
Chrome / Firefox plugins For Poor Connections?
Are there any plugins for Firefox or Chome that can help with Internet connections that are sporadic? Something that would automatically keep refreshing a page until it loads? --CGPGrey (talk) 16:48, 30 May 2011 (UTC)
- I'm not sure if you need plugins or just to adjust some of their base settings, such as the number of times to retry. StuRat (talk) 02:53, 31 May 2011 (UTC)
- You can also set the timeout, I'm fairly sure; setting it longer may help it not display a "this page cannot be displayed" error when a page takes more than some arbitrary length of time to load. Of course, it can also make disconnects harder to detect in the process. HominidMachinae (talk) 07:52, 2 June 2011 (UTC)
How to get max memory bandwidth throughput on a Tyan S4985-E
I have a quad socket S4985 MB from Tyan (link:http://www.tyan.com/product_board_detail.aspx?pid=554)
All sockets are populated with Opteron 8384 cpus (2.7GHz).
I have 24 GB of ram divided into
16GB = 8 x KVR800D2D8P6 x 2GB (Kingston 800mhz 2 GB DDR2 ECC registered Dual rank x8 DIMM's) Chips on both sides of DIMM
8GB = 8 x KVR667D2S8P5 x 1 GB (Kingston 667 1 GB DDR2 ECC registered single rank x8 DIMM's) Only chips on one side (SPD modified to run 800 MHz)
So all RAM is running and identified as 800 MHz RAM.
How should I populate the DIMM sockets to get the best Windows Gaming performance out of my selection of DIMM's?
I run a Dreamspark version of Windows Server 2008 R2 and various Linux distros.
I was thinking to run cpu 0 and 1 with 8GB each (4 Dimm's dual rank) and cpu 2 and 3 with only 4 GB each (4 Dimm's single rank), but would I benefit, and in what ways, if I gave each CPU 6 GB ?
That would then be two KVR800D2D8P6 (4 GB) in one memory channel and two KVR667D2S8P5 (2 GB) in the other memory channel per CPU.
I've read about memory ranks and x4 vs x8 configs but I'm wondering what would be the most optimal configuration with this 24 GB.
Suggestions with explanations are welcomed :-) 85.81.121.107 (talk) 16:58, 30 May 2011 (UTC)
To add to the question - I'm not proposing to edit any chipset DRAM / NB registers to optimize throughput. I just want to know if it makes sense to have each Opteron use 2 dual-rank + 2 single-rank DIMMs (6 GB distributed among all CPUs), or if I'd be better off populating CPU 0 and 1 with only dual-rank RAM (8 GB each) and letting the two other CPUs run just single rank (4 GB each). 85.81.121.107 (talk) 18:25, 31 May 2011 (UTC)
Reading here : LINK: http://www.nor-tech.com/solutions/intel/dox/DDR2%20advantages%20for%20dual%20processor%20servers.pdf page 3
1. Utilization of all four ranks per channel is always preferred for optimal performance; performance should not differ based on whether four ranks are spread over four DIMMs or four ranks are consolidated onto two DIMMs. 2. Identical DIMMs—Dual Rank (DR) or Single Rank (SR)—in one system are preferred because the system’s chipset can equally distribute memory addresses. Configuration with DIMMs of different ranks (i.e., mix of SR and DR DIMMs per channel) will offer sub-optimal performance versus identical DIMM configurations. If a memory upgrade is required, then the upgrade of all DIMMs is preferred over partial upgrade.
The above (1.) suggests that best performance is obtained by running dual ranks per channel (four ranks), so would I never get optimal performance on the CPUs in which I populate the DIMM sockets with single-rank memory, because the Opteron can take four ranks of memory per memory channel (the 8384 is capable of running dual channel = 8 ranks)? 85.81.121.107 (talk) 18:38, 31 May 2011 (UTC)
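For reference, the ceiling that the DIMM layout above is fighting for can be estimated with simple arithmetic. This is only a sketch of the peak theoretical figure; real-world throughput is lower, and mixing ranks as discussed in the quoted PDF lowers it further.

```shell
# Peak theoretical memory bandwidth per Opteron socket with DDR2-800:
# transfer rate (MT/s) x bus width (8 bytes) x number of channels.
MTS=800        # DDR2-800 performs 800 million transfers per second
BYTES=8        # one 64-bit channel moves 8 bytes per transfer
CHANNELS=2     # the Opteron 8384 has a dual-channel memory controller
PEAK=$(( MTS * BYTES * CHANNELS ))
echo "${PEAK} MB/s peak per socket"
```

That works out to 12800 MB/s (12.8 GB/s) per socket, which is the number any rank-population scheme is measured against.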
Get toner levels on laser printer
Resolved
I have an HP CP1215 color laser printer. I recently got a warning that the yellow toner was getting low. I want to check the toner levels (it must have that information), but I can't figure out how to do it. I go to "devices and printers" on the control panel (Windows 7), but I don't see anything there that shows the toner levels. I downloaded a diagnostics program from HP, but it doesn't show the toner levels either. The printer is out of warranty, so there is no free tech support from HP. Surely there must be a way to check the toner levels, but I can't find it. How can it be done? Bubba73 18:27, 30 May 2011 (UTC)
- See HP Color LaserJet CP1215 and CP1217 Printers - Checking Toner Levels. ---— Gadget850 (Ed) 18:29, 30 May 2011 (UTC)
- Thank you - I should have googled on those words. Bubba73 22:13, 30 May 2011 (UTC)
Mac Cocoa Multitouch
Resolved
I've been trying to add a swipe event to my application, and I have implemented my swipeWithEvent: method, as seen below. My question is, now how do I make it do anything? Should I connect it to something in Interface Builder, or something else? I appreciate your help.
- (void)swipeWithEvent:(NSEvent *)event {
      CGFloat x = [event deltaX]; // the expression was lost in transcription; deltaX is the usual value to test for swipes
      if (x == 1) {
          // do something;
      }
  }
--Thekmc (Leave me a message) 18:46, 30 May 2011 (UTC)
I figured it out, so I will answer my own question, just in case anyone else needs it later:
- Subclass NSWindowController
- Put the code I listed above into the subclass
- Put the subclass in Interface Builder
- Finally, control drag from the blue box to your main window, and select "window" from the little menu that pops up
I hope this is helpful to somebody. --Thekmc (Leave me a message) 01:45, 31 May 2011 (UTC)
- The documentation included with XCode provides tutorials in the proper use of Interface Builder, including how to properly design Controller classes and link them to subclasses of standard GUI classes. Online, you can also use Interface Builder Help for Mac from http://developer.apple.com. Nimur (talk) 23:43, 31 May 2011 (UTC)
software that will fix things in video
I am looking for software that will fix certain things in a video file. I transferred family 8mm film to DVD. Much of it is underexposed. A lot of it needs color correction. About half of it needs speed correction. And taking out camera shake would be good too. The needs (in order) are:
- automatically adjust the brightness (this is most important, much of it is underexposed)
- correct the color
- adjust the speed
- stabilize the image
Yesterday I spent 85 minutes on the Adobe Photoshop page and the Pinnacle Systems page. I know Pinnacle can adjust speed, and probably both can stabilize the image (reduce camera shake), but I'm not sure. I can't find anything about adjusting for underexposure or correcting color. Neither will give technical support unless you own the product. Roxio and Nero also might have software that does what I need.
Does anyone know of software that will do these things? Bubba73 18:49, 30 May 2011 (UTC)
- You are looking for a full-featured video editing suite. You have some free/open source options: VirtualDub and Avidemux, but you probably will be best suited with the commercial options: Adobe Premiere or Apple Final Cut Pro.
- Will those last two adjust the brightness and correct the color? I spent quite a bit of time on the Adobe website, and it didn't say. Bubba73 03:37, 3 June 2011 (UTC)
- I'd rather pay a few dollars and get commercial product with documentation and support, but the Adobe product is probably too expensive. Bubba73 20:55, 3 June 2011 (UTC)
Firefox problems
Hello. I am using Firefox 3.0 on a computer that hasn't been used for a while. It works OK for the most part, but some websites (notably Facebook) don't operate properly with it. I also have IE installed on my computer, but when I try to use it, it says it can't connect to anything. And when I installed the newest version of Firefox, it also said it couldn't connect to anything. What am I doing wrong? — Michael J 19:43, 30 May 2011 (UTC)
- Are you running any kind of firewall? If so is it configured to let Firefox/IE correctly connect to the internet? Exxolon (talk) 20:20, 30 May 2011 (UTC)
- I don't know. I never turned on or installed a firewall, but it may be. How do I check this? — Michael J 20:45, 30 May 2011 (UTC)
- What operating system are you using? Exxolon (talk) 22:03, 30 May 2011 (UTC)
- Windows xp. — Michael J 23:48, 30 May 2011 (UTC)
- Some versions of Windows XP have a built in Firewall - check your control panel, look for the Windows Firewall icon and open it. Turn the firewall off and see if that fixes the issue. If it does you have a misconfigured firewall, try turning down the security settings one step at a time and see if it works. If this doesn't work, your problem is something else. Exxolon (talk) 23:55, 2 June 2011 (UTC)
- Windows xp. — Michael J 23:48, 30 May 2011 (UTC)
- What operating system are you using? Exxolon (talk) 22:03, 30 May 2011 (UTC)
- I don't know. I never turned on or installed a firewall, but it may be. How do I check this? — Michael J 20:45, 30 May 2011 (UTC)
Speech recording
Which of these formats is the best one to record teacher's speech in a classroom?
- WAV PCM 8 bit
- WAV a-Law 8 bit
- WAV u-Law 8 bit
- WAV PCM 16 bit
- MP3 96kbps
- AMR
Slijk (talk) 20:56, 30 May 2011 (UTC)
- Is this homework? Did you read the articles on pulse-code modulation, A-law algorithm, μ-law algorithm, MP3, and Adaptive Multi-Rate audio codec? -- BenRG (talk) 00:34, 31 May 2011 (UTC)
- Depends on the device. This list looks like smartphone application supported formats. I have tried AMR, MP3 and an unknown type of WAV on a Nokia N70. AMR is heavily compressed and loses some information (maybe this can be fixed with a good external microphone); MP3 would be good if the CPU could keep up (the CPU in the N70 could not); WAV would be OK, but file sizes will be large (although 8-bit 8 kHz would be only 64 kbps). -Yyy (talk) 07:34, 31 May 2011 (UTC)
- Yes, these are formats from ALON dictaphone. Strange thing is Yyy, that I can't play those WAV files on both my phone and PC. Also, like you said, AMR is heavily compressed, however I don't think it differs much from mp3 96kbps... Slijk (talk) 23:16, 1 June 2011 (UTC)
- AMR is only 16 kbps. Well, I have not tried much besides AMR, because back then MP3 did not work well and I cannot remember why I did not try WAV. If MP3 works for you, then it is probably the best choice. Maybe the problems with AMR were caused by noise (the phone's microphone is not designed for picking up sound from 10 m away).
- These WAV files cannot be played even by the application which recorded them? Do none of the variants work (PCM 8-bit, a-Law 8-bit, etc.)? Do they work in VLC? (VLC supports many formats, but not AMR; maybe it could open WAV files with nonstandard/missing headers.) -Yyy (talk) 11:36, 2 June 2011 (UTC)
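The rates being compared in this thread can be tabulated with a little arithmetic. The MP3 and AMR figures below are the nominal rates quoted above, not measurements:

```shell
# Uncompressed bitrate = sample rate x bits per sample x channels; for
# 8 kHz mono that gives the 64 kbps figure mentioned above.
# Kilobytes per minute = kbps * 60 seconds / 8 bits per byte.
for fmt in "WAV PCM 8-bit:64" "WAV a-Law 8-bit:64" "WAV u-Law 8-bit:64" \
           "WAV PCM 16-bit:128" "MP3 (96 kbps):96" "AMR (as quoted above):16"; do
    name=${fmt%:*}
    kbps=${fmt#*:}
    kb_per_min=$(( kbps * 60 / 8 ))
    printf '%-24s %4s kbps  ~%4s kB/min\n' "$name" "$kbps" "$kb_per_min"
done
```

So for an hour-long lecture the 16-bit WAV option is roughly eight times the size of AMR, which is the storage/quality trade-off the replies above are weighing.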
typing foreign characters
I'm using Windows XP; how can I type foreign characters using my keyboard? The response I usually get is to use Unicode, but I don't really know how to input those either. 72.235.230.227 (talk) 21:43, 30 May 2011 (UTC)
- Some thoughts:
- 1) Unicode typically involves entering escape characters (somebody else can supply the details). So, it might be a good option if you only want to occasionally enter foreign language characters.
- 2) Cut and paste can also work (if you tell us what language you want, we can probably find you their alphabet for cut and paste purposes). Again, only good for occasional use.
- 3) An on screen keyboard, in the other language, might be an option if you need to type a few more foreign characters than in the above cases.
- 4) Another option is to change your keyboard mapping to give you different characters. This would work if you want a language with about the same number of characters as English, but not for oriental languages that have thousands of symbols. Of course, this option means you lose your ability to type with English characters, until you switch back. So, this would be a good choice only if you need to do extensive writing in the other language. You might also want to buy a new keyboard in that language, or at least get stickers, so the letters on the keyboard match what you get on the screen.
- Please let us know the language you want and give us some idea of how much writing you need to do in that language, so we can help further. Also, what's your intent with this foreign writing, do you want to print it out, send it in emails, etc ? StuRat (talk) 02:45, 31 May 2011 (UTC)
- I just want to be able to type foreign characters with ease without having to copypaste every individual character, such as Japanese, Korean, or Russian characters. Copypasting every individual character takes a bit too long for me 72.235.230.227 (talk) 08:27, 31 May 2011 (UTC)
- Install the correct Input method editor (IME). I have the Japanese IME installed on my laptop and can type Japanese phonetically. Astronaut (talk) 10:03, 31 May 2011 (UTC)
- Here's the official "how-to" from Microsoft: How to change your keyboard layout. Following these instructions, you can set up and swap between multiple keyboard layouts, including keyboards for different languages and character-sets. Nimur (talk) 23:52, 31 May 2011 (UTC)
- This is the easiest way to do it with Russian. There is usually a keyboard called "Russian phonetic" as well, which is a lot easier for someone who knows how to type on QWERTY (it maps Russian letters onto their approximate English homophone match — so zap becomes зап). --Mr.98 (talk) 15:59, 3 June 2011 (UTC)
- Input method editors and the like are completely unnecessary for languages that use mostly Roman-alphabet characters. Chances are all you need to do is google "alt codes" and hold down the Alt key while entering the code on your numpad. Simple, easy, and works no matter the PC (for instance, a school computer you're not allowed to add software to). HominidMachinae (talk) 07:54, 2 June 2011 (UTC)
- Which doesn't help with any of the languages the OP mentioned specifically. --Mr.98 (talk) 15:59, 3 June 2011 (UTC)
.aspx?
A webpage offers a PDF to download from a link, but all that is actually downloaded is myfile.pdf.aspx instead of myfile.pdf. Is there any way to obtain the PDF itself? Or has the website just made a programming error? I'm using the latest version of Firefox and WinXP. Thanks. 92.24.191.98 (talk) 22:54, 30 May 2011 (UTC)
- Try opening the .aspx file in a text editor. It may contain a link to the PDF in question. You could also, of course, just change its extension to "PDF" and see if that fixes it. In any case, it sounds like a definite problem with the site's coding — it is failing to send the PDF with the correct content type. --Mr.98 (talk) 23:02, 30 May 2011 (UTC)
- I had this exact same problem (with FF and IE on XP) whilst trying to complete my tax returns on SARS's website. It kept downloading .aspx text files instead of opening the PDFs in the browser. The problem was only solved when I downloaded and installed Adobe Reader and set my browser to open PDFs in the browser with Adobe instead of attempting to download them. Previously I was using Foxit Reader; I now run both, with Foxit set by default to open and read PDFs on my hard drive whilst Adobe "catches" all my browser clicks and opens PDFs in-browser, including those referenced by .aspx files. No other combination of browser setting and PDF software worked for me, so I'd say just install Adobe Reader 10 and get on with it. Zunaid 16:37, 1 June 2011 (UTC)
- This is common with improperly MIME-typed file outputs from web applications. It IS a working file (most probably), just has the wrong extension. Change the file extension to .pdf and it will work. --rocketrye12 20:14, 2 June 2011 (UTC)
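The wrong-extension theory from the replies above is easy to test. This is a sketch with an illustrative filename, and the "PDF" here is a fake stand-in; with a real download you would only do the rename and the check:

```shell
# Simulate the misnamed download: a "PDF" that arrived as myfile.pdf.aspx.
printf '%%PDF-1.4 fake content for illustration\n' > myfile.pdf.aspx

# Rename it as suggested above.
mv myfile.pdf.aspx myfile.pdf

# Real PDFs start with the magic bytes "%PDF"; if this prints %PDF,
# the file was a PDF all along and only the extension was wrong.
head -c 4 myfile.pdf; echo
```

If the first bytes are HTML instead (something like `<!DO` or `<htm`), the server sent an error page rather than the document, and renaming won't help.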
May 31
Extraction of Audio from .wmv Video File

Hello Everyone,

I have a .wmv video file, and I want to extract just the audio from the file, and save it preferably as an .mp3 audio file. Does anyone here know which freeware or free software can do this?

Thanks as always. Rocketshiporion♫ 05:46, 31 May 2011 (UTC)
- MediaCoder should do this; and Handbrake might (haven't tried it with WMA files). I don't think this works if the source is protected by DRM, however. --Kateshortforbob talk 10:53, 31 May 2011 (UTC)
- MEncoder and Avidemux are two more. I just tested the freeware version of AoA Audio Extractor and it also works very well, and with an easy to use GUI AvrillirvA (talk) 11:16, 31 May 2011 (UTC)
- I've downloaded and installed AoA Audio Extractor, and it works well. Thank you, AvrillirvA! Rocketshiporion♫ 13:01, 31 May 2011 (UTC)
- AKME FFmpeg has worked well for me.... Kingsfold (Quack quack!) 18:53, 31 May 2011 (UTC)
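On the command-line side, FFmpeg (mentioned above) can do the same extraction in one step. The sketch below synthesizes its own test clip rather than assuming your file, and it assumes an FFmpeg build that includes the wmv2/wmav2 encoders and libmp3lame:

```shell
# Make a 2-second stand-in for the real .wmv (test-pattern video + sine audio).
ffmpeg -y -f lavfi -i testsrc=duration=2:size=320x240 \
       -f lavfi -i sine=frequency=440:duration=2 \
       -c:v wmv2 -c:a wmav2 sample.wmv

# Extract just the audio track: -vn drops the video stream,
# -q:a 2 selects high-quality variable-bitrate MP3 from libmp3lame.
ffmpeg -y -i sample.wmv -vn -codec:a libmp3lame -q:a 2 sample.mp3
```

For a real file you would replace `sample.wmv` with your own filename and skip the first command entirely. As Kateshortforbob notes, none of this works if the source is DRM-protected.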
vst plug-in
Does anyone know of a (free) VST plugin capable of boosting a range of frequencies in real time by a given amount of dB? Say 0-90 Hz by 6 dB.
Can't seem to find any that do this.
Thanks. — Preceding unsigned comment added by 77.35.19.235 (talk) 07:23, 31 May 2011 (UTC)
- If by "vst" you mean Virtual Studio Technology then I guess you need a graphic equaliser plugin. Searching Google for that finds Voxengo's MarvelEQ, which is free. -- Finlay McWalter ☻ Talk 20:37, 31 May 2011 (UTC)
- Your DAW will almost certainly cater for this, whether it's a built-in function or a plugin included with the software. In this case, you'll want to use a low shelf filter and set it to boost below 90 Hz by 6 dB. If you need any help finding how to do this in your software, I'm happy to help. However, as you refer to VST (which is a trademark of Steinberg) I presume you're using Cubase or Nuendo, in which case you'll find the EQ in the left hand panel. This Sound on Sound article gives an overview of how to use EQ in Cubase. matt (talk) 21:41, 3 June 2011 (UTC)
how to remove old mobo/processor information from a WinXP installation?
I changed my mobo and processor from a very old Intel 845GLLY + P4 to an Asus board + Dual Core. I kept my old hard disk. Can I run the old installation on this newly configured system? When I switch on the system, all I get is a blue screen of death. Can I remove some information from the old installation using Puppy Linux or some other tool so that the Windows XP would run on the new system? --117.253.191.4 (talk) 08:32, 31 May 2011 (UTC)
- If you have your XP installation disk, boot from that and try doing a repair installation. It will warn you if it is going to wipe your hard drive - in which case, abort the install and use Puppy Linux to back up your stuff to another disk/USB stick/memory card/etc. You can then try the repair again. Having changed the motherboard, it is likely you will have to re-activate your XP installation. That process is fraught with difficulties if your XP install disk is an OEM version. Astronaut (talk) 09:58, 31 May 2011 (UTC)
Specific URL problem
If I browse to The Advertiser newspaper site (http://theadvertiser.com.au) , using Linux and Firefox, the website generally refuses to display anything, with the browser saying "Waiting for resources2.news.com.au....", however if I use Windows XP and Firefox (the same version as Linux) the site immediately displays correctly. Very occasionally the site will display correctly under Linux or more frequently, it will display badly formatted, and with most of the images missing. Does anybody know why this might be the case? Is it my Linux configuration? Other browsers running under Linux display similar results. Other newspaper sites work fine. Thanks! --TrogWoolley (talk) 11:05, 31 May 2011 (UTC)
- I only have Linux and it looks fine to me. It is probably some ad that is Windows-only. Add an ad blocker to Firefox to take care of that. I have to say that after looking at the page, I'm now rather upset that the American media hasn't said much of anything about the Australians killed in Afghanistan. -- kainaw™ 12:29, 31 May 2011 (UTC)
- Displays fine on Ubuntu/Firefox. Some other websites though... Astronaut (talk) 16:01, 31 May 2011 (UTC)
- I have Fedora 12 Linux and FireFox 3.5. The link directs me to http://www.adelaidenow.com.au/, which displays all OK. I only looked at the front page and didn't actually try to read any articles though. JIP | Talk 19:00, 31 May 2011 (UTC)
Automatic mounting on Linux triggered by device being already mounted?
I think I might have found a reason why the automatic mounting on my Fedora 12 Linux system keeps turning on and off. I just backed up my hard drive to my external hard drive, with the automatic mounting having turned on, and when it had finished and had unmounted the drive, I ran mount with no parameters and it said that /lacie2 (a common mount point I use for the external hard drives and the camera, although the camera is an Olympus and not a LaCie) was still mounted. I unmounted it manually, and the automatic mounting turned off. I then plugged the camera in, and Linux didn't automatically mount it. I mounted it manually, and unplugged it without unmounting it. (This can cause problems if the file system has been changed, but I remembered this and didn't change it.) Sure enough, when I later plugged it back in with /lacie2 still mounted, the automatic mounting had turned on. What could be causing this? JIP | Talk 18:50, 31 May 2011 (UTC)
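One way to spot the stale mount described above before unplugging anything is to ask the kernel directly. The mount point below is the one from the question; nothing here actually unmounts unless you uncomment the umount line:

```shell
MNT=/lacie2   # mount point from the question; adjust as needed

# Ask the kernel whether anything is currently mounted there.
if mount | grep -q " ${MNT} "; then
    echo "${MNT} is still mounted - unmount it before unplugging"
    # umount "${MNT}"   # uncomment to actually unmount
else
    echo "${MNT} is not mounted"
fi
```

Running this after each backup or camera session would at least confirm whether the automounter left the mount behind, which is the pattern the question suspects.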
June 1
Reconnecting a bash session
Currently I am using PuTTY on a Windows box to log into an Ubuntu box via ssh. The bash shell is what's running. I used that bash shell to start a very lengthy job on the Ubuntu box. What will happen if I close PuTTY? Will the lengthy job be terminated? If not, is it possible to subsequently re-connect to that bash shell so I can continue to view the output of the job? Comet Tuttle (talk) 00:09, 1 June 2011 (UTC)
- Normally, on a Unix or Linux computer, when your login session terminates, all your jobs end. This is part of the general "contract" for a user-account, as compared to a root-privileged login-account or a daemon login-account; you are a user, and your programs belong to your login. (The fact that your login-session is remote, over an SSH connection, is just a "detail" - the same rules apply even if you're sitting at the console). Some methods exist to circumvent this contract: nohup, and/or disown, in BASH, allows you (or your BASH shell) to request the operating system to allow a job to persist even after you hang up (log-out). This will either persist your job after your login-session terminates; or change ownership of the process to a system daemon, or some other POSIX-approved method to persist your job. On some systems, some user-accounts are not permitted to disown processes; if your system-administrator forbids nohup or disown, you may have to ask permission for them to enable it. (Our nohup article is in somewhat abysmal shape; but you can run the manual page for your ubuntu box, man nohup, and read up-to-date information specific to your computer system. For disown, read the manual for BASH by typing man bash and searching for disown. (Use the key sequence :/disown). The two commands have subtle differences in behaviors. (For clarity and to appease the POSIX-pedants who frequent this desk: on Ubuntu, Debian, and most other Linux, disown does not change the uid for the process; it simply de-registers it from the shell job-list without affecting anything in the kernel process control block. This additional step is necessary on other POSIX kernels that will auto-kill user-jobs who are not owned by terminals or pseudoterminals). Nimur (talk) 03:09, 1 June 2011 (UTC)
- I don't think you should say "This is part of the general "contract" for a user-account". Many multiuser *nix systems have policies against leaving processes running when you log out, and the administrators may take measures to enforce the policy, but it's not part of the standard Unix security model any more than, say, a "no profanity" policy. -- BenRG (talk) 18:14, 1 June 2011 (UTC)
The usual solution for this is a tool such as GNU screen. --FOo (talk) 03:31, 1 June 2011 (UTC)
- I've used GNU screen before and would definitely recommend it for the OP's task, if they have the privileges to install it.--el Aprel (-facienda) 03:44, 1 June 2011 (UTC)
- Thank you Nimur for the complete and precise answer! And I'll take a look at GNU Screen at some point. Thanks! Comet Tuttle (talk) 06:11, 1 June 2011 (UTC)
- Just a note that I strongly support use of screen. I have some servers that I maintain regularly. I set up my .bashrc to see if I have an active screen and, if so, connect to it; otherwise, it starts a new screen. So, if I ever get cut off for some reason, when I SSH back in, I go right back to where I was. If I exit (which I have to do twice, once for screen and once to get out of SSH), it stops the complete session so I'm not using resources. -- kainaw™ 14:28, 1 June 2011 (UTC)
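A rough sketch of the kind of ~/.bashrc logic Kainaw describes (hypothetical; the exact behavior of the flags is worth checking against man screen):

```shell
# ~/.bashrc fragment: on an SSH login, reattach to a detached screen
# session if one exists, otherwise start a fresh one.
# $STY is set inside a screen session, which prevents recursion.
if [ -n "$SSH_CONNECTION" ] && [ -z "$STY" ]; then
    exec screen -d -R
fi
```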
Red Hat Enterprise Linux 6
Hello Everyone,
I'm trying to obtain a copy of Red Hat Enterprise Linux 6, but for what's supposed to be FOSS software, it's mighty difficult to obtain. access.redhat.com won't let you access the evaluation download unless you have a corporate email address (which I don't), and resellers will not sell the media kits on their own without a bundled subscription. Does anyone here know of any website from which I can download the .iso files?
Thanks as always. Rocketshiporion♫ 11:02, 1 June 2011 (UTC)
- Odd. This page seems to let me create a personal login at Red Hat, without supplying a corporate email address. I assume I could then go on to download the free evaluation version of RHEL6 from here. Alternatively, have you considered Centos? It is a free, 100% binary compatible, copy of Red Hat, with no fussy restrictions on downloading... download and burn .iso file(s) then install without (IIRC) signing up first. It is currently at v5.6 but v6.0 is coming soon. Astronaut (talk) 11:13, 1 June 2011 (UTC)
- There's no problem in creating a personal account, but one can only download subscribed software with a personal account. A corporate account is required to download evaluation software. I've uploaded a screenshot to Flickr here. Rocketshiporion♫ 13:02, 1 June 2011 (UTC)
- Red Hat is not marketed as free. It is a subscription service. If you don't pay for the subscription, you don't get the service. Due to the licensing of Linux, they are required to make the source code available - but that doesn't mean that they have to put a big "download me" link on their website. It only means that if you walk into their office and ask, someone will be able to burn a copy of Red Hat on a disk for you - if you pay for the disk and the person's time. To get around all of that, Red Hat heavily supports (with money, time, and other resources) Fedora. You can download Fedora very easily by going to http://getfedora.com (which does have a very friendly "Download Now!" button). -- kainaw™ 14:26, 1 June 2011 (UTC)
- The GPL is very specific. It requires that if a binary is distributed, its source-code must also be available from the distributor. That is not legally equivalent to "Red Hat must give anything to anybody who asks for it." If Red Hat doesn't give you free access to a particular binary, even one covered by GPL, Red Hat is not obligated to provide source for that binary, either. See this item on the official FAQ from Free Software Foundation: Does the GPL require that source code of modified versions be posted to the public?
"The GPL does not require you to release your modified version, or any part of it. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it internally without ever releasing it outside the organization.
But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL. Thus, the GPL gives permission to release the modified program in certain ways, and not in other ways; but the decision of whether to release it is up to you."
— General understanding of the GNU licenses
- This stipulation (or rather, the legalese in the license that enforces it) is essentially the critical line-item that makes free software viable for commercial purposes; it is the reason why it can be very profitable to sell free software for a fee. Customers who purchase GPL-licensed software are under no obligation to hand it out to the rest of the world; but if they choose to do so, they must also provide source-code. Nimur (talk) 15:38, 1 June 2011 (UTC)
- RHEL being the way it is, I think I might go with the unencumbered CentOS. Rocketshiporion♫ 19:39, 1 June 2011 (UTC)
- You might also want to take a look at Scientific Linux, which also is a recompiled Red Hat Enterprise Linux. --NorwegianBlue 22:52, 1 June 2011 (UTC)
Recover images from formatted SD card
I've got an SD card that was accidentally formatted and not used since. Is there any free software for Mac that would allow me to recover the images on it? doomgaze (talk) 18:11, 1 June 2011 (UTC)
- PhotoRec, maybe. ¦ Reisio (talk) 18:58, 1 June 2011 (UTC)
- If the file areas haven't been overwritten then you'll likely be able to recover from it. If photorec doesn't work the type of tool you want is called a file carver, and image files are particularly easy to carve. There are many free and open source solutions available. Shadowjams (talk) 21:56, 1 June 2011 (UTC)
- If these images are highly valuable to you, I would recommend that you tread carefully with recovery tools as the act of attempted recovery can thwart more advanced recovery techniques. I have heard many many positive things about companies like DriveSavers that do removable media recovery in clean-rooms with very advanced microscopic-level techniques. They are expensive though -- it's going to depend on how valuable the images are to you -- but they are highly effective. --rocketrye12 20:09, 2 June 2011 (UTC)
- Clean room recovery is for Winchester hard drives that have suffered some kind of mechanical failure. This drive works fine, and it's not a Winchester drive, so a clean room isn't going to help.
- Reading from an SD card will not alter it in any way. The Mac OS might write a small amount of data to the card when it's inserted into the computer, but you can prevent that by write-protecting the card (if it has a write-protect tab—they all do, don't they?). Data recovery software, like PhotoRec, never writes to the device it's trying to recover data from.
- It's possible that professionals would be able to recover some overwritten data from a flash drive by reading from the flash chip directly, because of wear leveling. Other than that, there's nothing the professionals can do that you can't easily do yourself. -- BenRG (talk) 21:58, 2 June 2011 (UTC)
- "To protect drives and data from contaminant damage, DriveSavers performs all data recoveries in an ISO-certified cleanroom environment." I guess I was referring more broadly to DriveSavers' techniques, as far as cleanrooms go.
- And I've experienced a case where I've used the aforementioned tools to no success but drivesavers was able to recover the data.--rocketrye12 00:14, 3 June 2011 (UTC)
Can a multi-monitor rig be achieved with different output modes?
Hi, I've got a graphics card with two output ports, one VGA and one DVI. If I get a DVI to VGA adapter, can I set up a dual monitor rig? I'm thinking of buying another graphics card, this one with VGA, DVI, and HDMI. If I get a DVI to VGA adapter and an HDMI to VGA, can I set up a tri-monitor rig? --T H F S W (T · C · E) 18:55, 1 June 2011 (UTC)
- The manufacturers often aren't clear on whether you can use multiple outputs at once. So, that leaves you with trial and error. If it does work, you will probably just have a clone of the same image on each output. (You aren't likely to get a different part of the screen shown on each image.) StuRat (talk) 19:02, 1 June 2011 (UTC)
- How about one of the ATI Radeon series? --T H F S W (T · C · E) 19:33, 1 June 2011 (UTC)
- If you are asking whether you can use a single video card for a tri-monitor set-up, then the answer is you can. You need one of the AMD/ATI cards which support Eyefinity. In theory you need a card with a DisplayPort and two other outputs. If your display has anything other than a DisplayPort, you need an active adapter. However, according to some sources, as discussed in our article, if your card supports Eyefinity (basically most or all cards in the 5xxx and 6xxx lines) you may be able to use one or two HDMI/DVI-D outputs (in a simplistic fashion they can be considered the same thing) combined with two or one analog/VGA outputs. It is unlikely you will be able to use 3 analog outputs, since only 2 RAMDACs are included as part of the GPU and it is unlikely the manufacturer included a standalone one. I also discussed this in more detail a few weeks back; check the archives. I suggest you ask in more depth for experiences, probably outside the RD, if you can't return the card. I tried but got no response. Nil Einne (talk) 20:38, 1 June 2011 (UTC)
- Re the first question, FWIW: I'm using an NVIDIA GeForce FX 5500 card with a DVI and a VGA port on my home PC. I'm using two monitors, one connected with a VGA cable, the other with a DVI cable (most monitors come with connections for both types of cables these days). Works flawlessly, both with Xp and Ubuntu (using the proprietary drivers). --NorwegianBlue 21:35, 1 June 2011 (UTC)
- Looking more carefully, there seems to be a question embedded in the title as well. Generally speaking, the card doesn't care if you are using DVI-D and VGA or HDMI and VGA or whatever. Although I should clarify that some cards may be limited to only one analog or only one digital output. However, I believe most standalone cards still support a minimum of 2 analog outputs. And 2 digital outputs have also been supported by many cards for the past perhaps 4+ years, either with two DVI-Is or nowadays perhaps a DVI-I and HDMI. (I think there have been at least 2 TMDS transmitters on the GPU for quite a long time; however, many manufacturers may have thought it better to include one VGA and one DVI-D so people with a single monitor wouldn't need a converter, and also some may have preferred one dual-link DVI-D rather than a single-link DVI-D.) IGPs may be different.
- However, as per the archived discussion, nowadays some cards come with HDMI, DVI and VGA. In that case, while I suspect you can use 2 VGA (using an adapter for the DVI-I), VGA+HDMI, VGA+DVI-D or DVI-D+HDMI (or with a converter DVI-D+DVI-D), it's not something I have experience with. (I would guess the most likely thing not to work is VGA+HDMI.) As per above, other than Eyefinity or professional cards (or the ancient Matrox cards), you are unlikely to get 3 simultaneous outputs.
- Also I disagree with StuRat. For the past 8+ years, most cards from ATI/AMD and Nvidia have supported dual output (meaning independent outputs, not cloning) - see, for instance, Radeon R100. As I said, there may be some variance in/confusion over what sort of outputs you can use, but not in the support of dual output in some manner. (Can't speak of driver support in *nix.)
- Nil Einne (talk) 23:44, 1 June 2011 (UTC)
- There are graphics cards available, such as the nVidia Quadro NVS450, which support up to four monitors simultaneously. Rocketshiporion♫ 05:41, 2 June 2011 (UTC)
- One of those cards is great, for those who aren't financially challenged. So basically, with the proper drivers, using HDMI > VGA and DVI > VGA adapters, I can set up a tri-monitor rig? --T H F S W (T · C · E) 19:15, 2 June 2011 (UTC)
Network Printer
At my work, we have an HP Laserjet color printer HP4000 that has a Netgear PS101 small printer server attached directly to the back, which is then plugged into an ethernet port in the wall. It is currently not installed to any computer. In order to install the printer, I need to figure out the IP address it is currently using. I've tried several options I found through web searches, and none of them have worked, so I thought I'd ask here for fun. To summarize, how can I find the IP address for that networked printer? --Mephisto275 (talk) 19:06, 1 June 2011 (UTC)
- You can use nmap to scan the local subnet. For example, nmap 192.168.0.0-255 The print server should report that it has printing-relevant ports (like 631 IPP) open. Make sure to tell your local network admin you're going to do this (if you are not he) as nmap scans can sometimes trigger internal security software (as they're often used by intruders). -- Finlay McWalter ☻ Talk 19:44, 1 June 2011 (UTC)
- In addition, the -O option to nmap will try to identify the host type. For a Netgear ADSL modem I tried it on, it reports the MAC address as being in a range registered to Netgear and the OS as being "MontaVista embedded Linux". -- Finlay McWalter ☻ Talk 19:50, 1 June 2011 (UTC)
- A NetGear print server should have a name printed on the label in the format PSnnnnnn. You should be able to use that name to open the built-in web server. ---— Gadget850 (Ed) 19:56, 1 June 2011 (UTC)
- Nmap is a great program, which did exactly what I was looking for as far as scanning the entire network for the printer. Thanks so much for bringing it to my attention! Unfortunately, it didn't find the printer. Maybe there is something else going on that's keeping the printer from even accessing the network. We recently redid our main network switch, and it might be possible that the wall port for that particular office didn't get plugged back in. Is there anything else that would keep that printer from obtaining an IP address?
- Also, the printer server does have a "device name" that matches the format PSnnnnnn. I believe that it becomes the port name once that print server is installed. How exactly do you mean that I can use it to open the built-in web server?--Mephisto275 (talk) 16:17, 2 June 2011 (UTC)
- With regard to nmap: unplug your print server, run nmap (storing its output in a file), plug the print server in (make sure it's on, give it a couple of minutes to get all set up) run nmap again, and take the diff of the two runs. If there is no difference, something is wrong indeed. I think the PSxxx name to which Gadget850 refers is a Windows Internet Name Service name, and if its registration on the network works okay you should be able to point a browser to http://PSxxx and get the print server's web control panel. The manual for the PS101 is here (I'll have a read through that just now...). If this is one of those "look what we found in the cupboard" cases for you, it may be that the PS101 is in a specific configuration (for a network setup that no longer exists). In that case you can restore it to factory default condition with a recessed button on the back (#4 on their diagram, beside the 12V DC receptacle). -- Finlay McWalter ☻ Talk 16:34, 2 June 2011 (UTC)
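The before/after comparison described above can be scripted. In this sketch the two host lists are simulated with hand-written files; a real run would populate them with something like nmap -sn -oG - 192.168.0.0/24 (greppable ping-scan output), filtered down to the IP addresses:

```shell
# Simulated host lists from two scans: print server unplugged vs plugged in
printf '192.168.0.1\n192.168.0.10\n' > /tmp/before.txt
printf '192.168.0.1\n192.168.0.10\n192.168.0.23\n' > /tmp/after.txt
# comm needs sorted input; print hosts present only in the second scan:
comm -13 /tmp/before.txt /tmp/after.txt    # → 192.168.0.23
```

Whatever address appears only in the second list is the candidate IP for the print server.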
- Does the printer itself have a network configuration page somewhere in the settings menu? Astronaut (talk) 11:03, 4 June 2011 (UTC)
- The printer does have a print configuration option, but it does not include an IP address in what it prints out. --Mephisto275 (talk) 16:43, 7 June 2011 (UTC)
HTML/CSS Table help
Resolved
Hello! Here's an example of a table layout that I'm trying to make for my personal website. As you can see, basically I want a two-cell, one-row borderless table, with an image in the right cell and text in the left. I'd like both cells to be the same width, but the (wrong) way I've been doing this is by adding <br /> tags to control the width of the text cell; otherwise the width extends all the way to the edges of the browser window. This is especially annoying to deal with when I take out and add parts of the text.
1. How can I force both cells to be the same width, or set my own width for both cells (obviously, the right cell isn't a problem, since I can control the size of the image)?
2. Also, as the table stands now, the text is centered vertically. How can I force the text to begin at the top of the cell and push unused space to the bottom, instead of distributing it to the top and bottom?
3. How can I justify the text in the cell?
Thank you for your help.--el Aprel (-facienda) 19:20, 1 June 2011 (UTC)
- Never mind on 1 and 3. One can specify a height and width in <td>, and use style="text-align: justify". I'm still interested in 2, though.--el Aprel (-facienda) 19:34, 1 June 2011 (UTC)
- For #2, use <td valign="top">. Rocketshiporion♫ 19:47, 1 June 2011 (UTC)
- Great, thanks!--el Aprel (-facienda) 20:06, 1 June 2011 (UTC)
ftfy ¦ Reisio (talk) 00:13, 3 June 2011 (UTC)
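For anyone finding this thread later, here is a minimal sketch of the layout discussed above. The widths and the image path are arbitrary placeholders; vertical-align and text-align are the CSS equivalents of the valign attribute and style="text-align: justify" suggested in the replies:

```html
<table style="border-collapse: collapse; width: 640px;">
  <tr>
    <td style="width: 320px; vertical-align: top; text-align: justify;">
      Page text goes here; it wraps at the 320px cell width instead of
      stretching to the edge of the browser window.
    </td>
    <td style="width: 320px; vertical-align: top;">
      <img src="photo.jpg" alt="" width="300" height="200">
    </td>
  </tr>
</table>
```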
Change from Gnome to LXDE
Hello again! I have a ≈10-year-old computer running Debian lenny and Gnome. Although I have it boot up to the console, I occasionally use startx and run Gnome, which is really slow because of low RAM. I'd like to change to something more lightweight, like LXDE. What configuration files do I need to change after I run apt-get install lxde (as root) so that when I run startx it brings me to LXDE instead? Thank you!--el Aprel (-facienda) 22:15, 1 June 2011 (UTC)
- It's a while since I used Debian, but I'm pretty sure there's a selection somewhere on the login screen where you choose which window manager you want to use. So after you install LXDE, I think that all you have to do is to boot it into graphical mode once, and then select LXDE from the login screen when you log in. Later, if you start up in console mode, it should be your default window manager when you run startx. --NorwegianBlue 22:46, 1 June 2011 (UTC)
- startx is a frontend to xinit, which is typically configured through either ~/.xinitrc or ~/.xsession. These files are simple shell scripts that are run as-is by xinit (and thus by startx). Some links on how to edit those files: . --Link 08:17, 2 June 2011 (UTC)
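A minimal example of the ~/.xinitrc approach described above, assuming Debian's lxde package provides the usual startlxde session script (worth confirming with which startlxde after installing):

```shell
# ~/.xinitrc: run by xinit (and thus startx); the exec'd command
# becomes the X session, so when it exits, X shuts down too
exec startlxde
```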
June 2
how to hack a web or blog
I want to create my own blog and website — Preceding unsigned comment added by Gerrymain (talk • contribs) 14:59, 2 June 2011 (UTC)
- You don't need to "hack" to make a blog and website. Why not start with something like WordPress.com or Blogger or LiveJournal and see where that takes you? --Mr.98 (talk) 15:30, 2 June 2011 (UTC)
- On the other hand, if you're wanting to do this as a programming experiment, then "hacking" in the old-fashioned sense of putting together code rapidly might be the right word, and the tools 98 mentions aren't going to work for you. Then it becomes a matter of how "deep" you want to go: anywhere from using customizable blog software, to using a LAMP server to build a new blog application, to writing your own webserver. It's not clear where on that scale of possibilities you'd want to be. i kan reed (talk) 18:43, 2 June 2011 (UTC)
- You might also consider Google sites, but as ikanreed points out, we need to know a little more info to make a better recommendation.--rocketrye12 20:01, 2 June 2011 (UTC)
- I thought google sites was closed a couple years ago... i kan reed (talk) 20:08, 2 June 2011 (UTC)
- 'Google Page Creator' AKA 'Google Pages' was closed, but Google sites is still going strong--rocketrye12 20:11, 2 June 2011 (UTC)
Sync program that works between Win 7 and XP and works with networked drives
Can anyone recommend a program which I can use to sync files between my personal laptop and desktop and my work computer, which uses networked drives (apparently networked drives are a problem for many sync programs)? --129.215.47.59 (talk) 15:40, 2 June 2011 (UTC)
- FastCopy AvrillirvA (talk) 15:45, 2 June 2011 (UTC)
- You might consider DeltaCopy, an rsync implementation for Windows. From a little bit of preliminary searching it looks like SMB (\\HOST...) connections should be fine. But I must ask: if both drives are networked, why must you sync them? Why not just point both to the same drive, i.e. one or the other? You also might consider an external RAID array or other SMB/NFS file-sharing solution, depending.--rocketrye12 20:05, 2 June 2011 (UTC)
- Another option is AJC Sync. It's US$30. I know that it works with networked drives. Or you can try the free SyncToy, which has fewer features than AJC. - Akamad (talk) 03:38, 3 June 2011 (UTC)
- Why not just use Windows Offline Files? --Phil Holmes (talk) 08:27, 3 June 2011 (UTC)
June 3
128 bit gaming
I've seen video games progress from 8 bit -> 16 bit -> 32 bit -> 64 bit. Why is it that the PS3 and XBOX360 remained at 64 bit, and the Wii regressed to 32 bit, instead of any of them moving to 128 bit and (later) 256 bit? Ballchef (talk) 02:58, 3 June 2011 (UTC)
- I suspect that the answer is that the benefits aren't worth the effort, or the hardware costs. Extra bits only really amount to extra precision in games - i.e. you can model things more accurately. What is needed most is usually to model things faster - so you can model more of them. AndyTheGrump (talk) 03:09, 3 June 2011 (UTC)
- I see! And by "more accurate" do you mean, for example, more life like animation? That would explain why the Wii has gone back to 32 bit. Ballchef (talk) 03:29, 3 June 2011 (UTC)
- The old practice of doubling the advertised number of "bits" each generation was purely a marketing gimmick. There was no aspect of the hardware that these numbers described, although there probably was always some N-bit bus somewhere that they could point to to justify their claim. If they wanted to call current consoles 256-bit or 1024-bit, they could find a way to justify that. But I suppose they decided that the large numbers were starting to look silly. I think that a few consoles were advertised as 128-bit. -- BenRG (talk) 04:30, 3 June 2011 (UTC)
- I have to disagree with your claim "There was no aspect of the hardware that these numbers described". According to orders of magnitude (data), the numbers referred to the "word size" (instruction length) of the console. While I don't know what this means, it seems to have a purpose. Also, you missed my point: why do they not continue to grow the "word size" of these machines? If Andy the Grump is referring to what I think he is, then they should, so that the images and video of games look more realistic. Ballchef (talk) 07:31, 3 June 2011 (UTC)
- The word size of processors has not increased in recent consoles because it isn't necessary to do so. Regarding the relation between word size and realistic 3D graphics: 32-bit floating-point numbers can model 3D geometry so accurately that whatever distortion results from the lack of accuracy is essentially indistinguishable. In my (very limited) experience, lack of accuracy only causes 3D geometry to do weird things (such as two separate objects joining or overlapping when moved close enough to each other) when very fine scales are used. In games, such scenarios never happen because of what games are; they are not real-world simulations, and they don't let the player end up in such situations.
- Regarding the word size and instruction length, our article happens to be incorrect about this issue; the PlayStation 2 processor had 32-bit instructions. There is no relation between the size of words (quantities of bits that the processor can process as one piece of data) and the length of instructions. For example, the x86-64 processors in PCs have instructions that have a variable length, 8 to 120 bits IIRC, but the processors only operate on words up to 64 bits in length. Rilak (talk) 08:06, 3 June 2011 (UTC)
- That can't be right, because instructions on intel chips fall exactly at byte boundaries. It must go up to 128 bits. i kan reed (talk) 13:08, 3 June 2011 (UTC)
- 15 x 8 = 120. AndyTheGrump (talk)
- I can't help it if I can't do elementary math anymore. i kan reed (talk) 14:02, 3 June 2011 (UTC)
- First, like BenRG says, those numbers don't always mean anything. For example, the Dreamcast was frequently advertised as a "128-bit gaming system", but it used a CPU with a 32-bit word size.
- Secondly, CPU word size isn't really an indication of CPU power. It primarily has to do with precision, and with how much RAM the CPU can use effectively. There are a few other things a larger word size is better for, but in general it's not really a big deal compared to other aspects of the CPU.
- Really, word size is a boring technical detail that advertisers and marketing people latched onto for a while because it was an easy-to-understand number that kept going up. APL (talk) 17:37, 3 June 2011 (UTC)
- Word size is a boring technical detail, but it may have important ramifications to other things like maximum sustainable frame-rate, or maximum number of renderable polygons per second, which are more intuitive specs for a gamer to understand. The hard part of marketing digital electronic computer hardware is trying to figure out which of the six billion transistors are worth advertising: every part of a modern computer's hardware is used for something, and different architectures differentiate themselves in different ways. In previous generations, processor word-size was a good "umbrella" for a set of additional peripheral features, sort of like describing an automobile only by the number of cylinders in its engine. Technically, you could put a 12-cylinder engine on a tin can, but nobody sells high-end systems with low-end peripherals. Similarly, a 64-bit system "implies" that a set of additional hardware, including SIMD, multimedia extensions, advanced DMAs, and other boring technical details will also be included; consequently, the game features will be less limited by hardware.
- Anyway - specifically regarding 128-bits: you might find 128-bit buses all over your system; but this is a feature that is opaque to most game programmers, who tend to reside in application space nowadays. As hardware has become more sophisticated, games programmers spend less time tuning to hardware details, relegating this difficult task to hardware driver programmers and operating systems programmers. I think you will be hard-pressed to find any modern game-programmer who writes to the bare metal of her machine, without an operating system to hide the number of bits and other boring technical details from them. In the old days, when Nintendo programmers found ways to slam AI code into audio-processor ROMs and similar such trickery, it really mattered to performance if the system audio bus was 4, 8, or 16-bits; but these sorts of hardware hacks are essentially nonexistent on today's platforms. Nimur (talk) 18:15, 3 June 2011 (UTC)
(homework question, got it resolved by my prof, thanks!)
Multicolumn text flowing multipage sliding
Dear All,
Hi. I need help with multi-column text that flows across multiple sliding pages. The layout should calculate the total number of columns, the gap, the text size, and the image size for different browsers, and be controlled with Previous Page and Next Page buttons. At the end of the story, the next-page sliding should stop. Please give me suggestions on how to implement this with CSS, JavaScript, JScript, or anything else.
Best Regards, baski — Preceding unsigned comment added by Indianbaski (talk • contribs) 09:22, 3 June 2011 (UTC)
More MS Office Problems
Every time I try to open a document by double clicking on it, MS Word opens up as normal and as expected, however, without fail I get a message saying that Word cannot find the file and to make sure I've 'spelt the address properly'. When I then double click on the document again, with Word still open, it opens up fine. I've looked around for info on this, and was able to find someone advising others to 'add folder to trusted files' and stuff, which I've done for some folders - but it doesn't work for all the folders inside folders (it says it does, but it doesn't) and I don't like having to use a workaround for something that was not broken a short while ago. Does anyone know what I can do to fix this?
Also, possibly unrelated but, in recent weeks, any Word .doc file has been showing without its default square W icon. I've been getting an icon that looks like it should be .rtf or .xml or something. .docx files, however, show up fine with their default rounded W icon. Can I fix this?
I am not sure if it's relevant, but I have all the updates for Office 2007, plus Acrobat X Pro and TRADOS Studio 2009 & Multi Term addons. --KägeTorä - (影虎) (TALK) 11:48, 3 June 2011 (UTC)
- As for the appearance of .doc files in Windows Explorer, perhaps you installed some software that tells Windows that .doc files belong to it instead of to Word? I use Windows 7, but I think the following will also work on XP and Vista: Right-click one of the .doc files. On the "General" tab, it should say "Opens with: Microsoft Office Word". If it lists a different app instead, click "Change...", and then the "Browse" button if necessary, to change it back to Word. Comet Tuttle (talk) 23:05, 3 June 2011 (UTC)
common practices request
Hi,
I'm about to join a development "team" that had consisted of a senior developer, who was supposed to take me on for a while until I learned the ropes as a junior developer. Unfortunately, the senior developer has departed prematurely, and I will have no guidance whatsoever. The situation is a common LAMP setup, without a staging server or test server; instead the production process consists of work on two machines: development and testing on the developer's machine, and the production server. I did ask the senior developer about the process he uses on his own machine, and he uses Git for version control on his own machine (and, obviously, none on the production server). So, I hope to do the same thing. My request here is for a detailed description of the common process someone would use to maintain code with Git from their own machine, to be then used on a production (single-server) LAMP environment. The code is not complicated, and the Git repository does not have several branches, just a single one. (Sorry if I'm misusing the terminology; I haven't used this version-control system yet.) Could you, in particular, walk me through the process I would probably use? (The previous developer tests on his own machine using a VM.) Is it something like this:
-> From the Linux shell (bash etc.), check out by hand (typing a checkout command) the files to be edited -> Edit them in (alphabetically listed:) Emacs or Vim -> Check them back in -> ??? -> Test in my VM, which would get the files from the Git repository? -> ??? -> ???
Basically, I'm a bit fuzzy on the workflow... However, I am very motivated, and am sure I can figure it out if you advise me on what you think would be the most common practice for this situation! Thanks so much. Also, the guy had only edited the HTML files by hand - no WYSIWYG layout software or anything like that... would it be normal for a designer (which is more my background) to be set up to somehow use Git as well? How? --86.8.139.65 (talk) 18:29, 3 June 2011 (UTC)
- Using git on a personal machine like this is useful for removing changes. That is the sole reason for it. It isn't for backup - if he loses his machine, he loses the repository. It isn't for multi-user version control. He is the only developer. So, he is using it to make a change, demo the change, and possibly remove the change. He is certainly running a copy of the site on his own computer, which is where he views/tests changes. Once all is good, he just copies the files to the webserver. No need to have it load from git because it always has the latest version once changes are accepted. As for editing by hand, that just means that he knows what he is doing. Using a WYSIWYG editor is painfully tedious if you just want to make a site work. Anecdote: I developed our entire site here by hand. Any changes were implemented very quickly. No pain. No fuss. Then, the order came down from on high that all websites must go through SiteExec. It took weeks to move the existing site to SiteExec. It took weeks to agree that much of the site simply couldn't be implemented through SiteExec. It took weeks more to agree on how to downgrade anything of interest in the site because it was far too painful to make simple changes (such as fixing a typo in a person's name) through SiteExec. Now, the website is broken and outdated and everyone refuses to fix it. It is possible that some day in the distant future we'll be allowed to get rid of the WYSIWYG editor and be able to take control of our site again. -- kainaw™ 18:42, 3 June 2011 (UTC)
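Since the OP says they haven't used this version-control system before, here is a concrete sketch of the single-developer cycle kainaw describes (snapshot, edit, demo, back out if needed, then deploy by plain copy). Every path, hostname, and identity below is a made-up placeholder, not something from the actual setup:

```shell
set -e
# Hypothetical working copy; in real life this is your existing project directory.
rm -rf /tmp/site && mkdir -p /tmp/site && cd /tmp/site
git init -q
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Junior Dev"
echo "<h1>v1</h1>" > index.html
git add -A && git commit -q -m "snapshot before edits"
# ...edit, then view the change on your local copy of the site...
echo "<h1>v2</h1>" > index.html
git commit -qam "tweak heading"
# if the demo goes badly, back the change out:
git revert -n HEAD && git commit -qm "back out heading tweak"
cat index.html
# once a change is accepted, copy the tree to production, e.g.:
#   rsync -av --exclude '.git' ./ deploy@www.example.com:/var/www/site/
```

The final `rsync` line is commented out because the real server details are unknown; the point is only that deployment here is a plain file copy, with Git confined to the developer's machine.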
- Thanks for the highly relevant info! --86.8.139.65 (talk) 19:17, 3 June 2011 (UTC)
- It's difficult to know what "best practice" is without a better idea of the importance of the server/site to the company (e.g. what happens to the company if the site is down for two days? or if the last week of transactions on the database are lost?). Knowing how important it is, and what steps have been taken to resist loss or downtime due to system crashes, powercuts, fires, etc., will define how paranoid "best practice" needs to be. It sounds like the system justifies the time of a full-time developer; if that's true, it's very difficult to understand why it wouldn't justify a proper staging server and a battery of tests to be done there before changes are pushed to the live server. -- Finlay McWalter ☻ Talk 19:36, 3 June 2011 (UTC)
- Thanks, also extremely useful. Without further access to the previous developer on my part, could you provide any other guidance you think would be useful for me? Other than git and emacs, what do you guess the previous developer would have been using? I am kind of in the deep water on this one, as I will basically just receive access (passwords) and then am on my own, despite being only a junior developer. So anything you can throw my way would be extremely useful from this perspective. As for your questions, it is a tiny company on the brink of collapse and I'm working for peanuts with the hope of proving myself. (However, I had thought I could do so with some guidance from the senior developer, who was not paid in full and so has completely stepped down early; basically he will just give me passwords, "handing me the keys")... So, this is obviously not an ideal situation, and much more akin to an internal (intranet) site than a customer-facing one, but shit happens and I'll try my best. I hope you will have more guidance for me! Thanks so much. --86.8.139.65 (talk) 19:55, 3 June 2011 (UTC)
- incidentally, and this could be superfluous information, but there's nothing wrong with the business model (customers are coming and paying, the phone's ringing); it's just internal organization issues between the founders, all but one of whom have moved on to something else, that have led to this situation. There's nothing wrong with the prospects of the company; it's just that it could really use active development again, which hasn't happened in several months. So, all of the suggested information above about a staging environment is quite helpful (though I might not be the person to do it), but as for putting out the fire right now, I'm really looking for the most practical things I will need to "pick up the reins", so to speak. Thanks for anything like that which you could suggest! By the way, we do have test scripts, so it's not as bad as it could be :) --86.8.139.65 (talk) 19:59, 3 June 2011 (UTC)
- (edit conflict) I see, it's trial-by-fire for you (and it sounds like you're healthily philosophical about it; after all, firefighters only really learn to be firefighters by fighting real fires - just don't get burned). Don't assume that the departing guy did things in a terribly proper way; I've seen plenty of places where someone would just log in to the hosting machine and edit files with emacs, and if this made the running site go a bit wonky during the process, they didn't care. If I were you, I'd first make sure there are backups and replications going on as best you can, so that if a piece of hardware (like a disk) dies you can get things back - the trouble is that, even though you know only a properly configured redundant system (tested failover and fallback mechanisms, fault-tolerant RAID arrays, offsite backup storage) would really protect the site, when something dies it'll be you on the hook to get things working, and you who gets the blame when the site is down for a week while you're building a new server. It's especially stressful as the new guy, because eventually you will break something (or become so paralysed by fear that you'll be too change-averse to get anything done). In the long term, a decent web app ends up being built from templates (either official templating-system templates or things cobbled together in PHP) rather than being hand-written code or the output of web-design programs. So you'd hope that departing-guy was using a template engine of some kind to build the site (even if it's just static content); sooner or later you'll be asked to rearrange the menu on every page (or something that, to management, seems like a trivial request) - if the site has 100 pages then you'll kill to be able to edit just one template and rebuild. As to workflow - I've heard of places that do something like this:
- You pull the tree on a developer's machine, implement and test changes, and then commit them back to the tree with a tag
- On a staging server (I guess a staging VM for you; you want it as close to the production environment as you can) you pull that tag and run acceptance and module tests, as appropriate. If the site we're talking about is really an app (and not just some static pages) then you'd really hope the guy has written a bunch of tests for it (and if he hasn't, maybe it's a valuable thing for you to do). In a sensible organisation, where the app is reasonably mission-critical, you often get a sign-off process at this point, where a manager or another developer approves the patch (and that approval gets recorded in some document somewhere, so if things go wrong then the blame is carried by everyone involved, not just some lowly frightened junior developer pushing his first patch to a site he barely knows).
- Only then, on the live server, do you pull the tag. Depending on your app, this itself can be a scary proposition, as you need to think about what happens to the traffic that's on the site while you're making that patch (which, if it's more than one file, isn't entirely instantaneous). It's common for sites to have two or more web-facing servers, and when they push a change they set the load-balancing system to send no new requests to that machine; once all its current business is done and no live transactions are outstanding on it, you push the changeset (confident that there won't be "skew"). Then the load-balancer sends traffic to that one and you repeat for each customer-facing server (often with a delay, so you can back off if a change makes a server overload or crash).
- If you don't have a load-balancer and multiple servers (which it sounds like you don't) then you either need to briefly take the site down to make a patch safely (which may be okay, depending on your business) or you need to find out how to do an atomic update on a live site - how to do this depends on the site runtime. Some runtimes (e.g. many Java servlet containers) do it automatically - they can keep both the old and new versions around, and only purge the old when all the traffic that was using it is done. Don't assume, of course, that the departing guy thought very hard about this stuff. -- Finlay McWalter ☻ Talk 20:25, 3 June 2011 (UTC)
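The tag-promotion flow in the steps above can be sketched with plain Git. This is a minimal, runnable illustration with everything on one machine under /tmp; in reality "origin" would be a shared repository and the dev/production checkouts would live on separate hosts, and all names here are hypothetical:

```shell
set -e
rm -rf /tmp/flow && mkdir -p /tmp/flow && cd /tmp/flow
git init -q --bare origin.git              # stands in for a shared repo
git clone -q origin.git dev && cd dev      # the developer's working copy
git config user.email "dev@example.com"    # placeholder identity
git config user.name "Junior Dev"
echo "menu v1" > menu.html
git add -A && git commit -q -m "rework menu template"
git tag -a v1.0 -m "passes local tests"    # the tested, named change set
git push -q origin HEAD v1.0
# staging/production check out exactly the tagged revision, nothing newer:
cd /tmp/flow && git clone -q origin.git prod && cd prod
git checkout -q v1.0
cat menu.html
```

The key design point is that staging and production never pull "whatever is newest" - they check out a named, signed-off tag, so you always know exactly which revision is live.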
User DMCA report system
Does anybody know if there is some sort of DMCA reporting system for users/fans?
The thing is, the DMCA demands that copyright infringements be reported only by the copyright owner or an authorized person. This in turn makes it impossible to report even the most blatant copyright infringements on a third-party website (e.g. at a filehosting service).
If there were some centralized website where users can report copyright infringements to the respective owners, that would go a long way towards fighting copyright infringement, much of which today takes place via filehosting services and online forums.
Registering at that supposed central DMCA reporting website would preferably be organized such that each user's account is tied to their real-life identity, to prevent abusive flooding of the report system. This could be achieved e.g. via unique email-addresses (from official bodies like universities etc.).
In my imagination, that reporting system would feature a very simple interface with just a couple of fields to enter the name of the work and artist, and the url(s) where the copyright infringement takes place (which may involve several places, e.g. a file at one or more filehosting services plus meta-search engine entries pointing to that file).
I'm pretty sure that for any given work, there would be at least one dedicated fan somewhere in the world, eagerly willing to scour the depths of the internet on behalf of their idol(s). So it would probably work even without any monetary incentive.
Does anybody happen to know if such a thing was ever tried or talked about? I believe it makes sense: even if it could not stop all copyright infringement, it may help to cut down on a lot of it, esp. at the most easily accessible places (flash streaming websites, filehosting services, forums).
Alternatively, I'd be glad if anyone could enlighten me about possible flaws and/or improvements to this idea. --195.14.222.40 (talk) 18:36, 3 June 2011 (UTC)
- Firstly, a DMCA claim is a legal assertion (made under penalty of perjury) that a given work is being distributed outwith its lawful licence. Only the copyright owner, or their agents, knows for sure what is or isn't being used unlawfully. Secondly, this is a civil dispute, and general principles of civil law limit actions to those who have "standing" in the matter; courts aren't interested in random filings from any old person, and third-party DMCA claims are equally meaningless. And thirdly, in making a DMCA claim a party accepts certain liabilities, particularly if their claim turns out to be false - those parties to whom the claim is sent may be able to recover the costs involved in servicing the claim (which might include expensive legal bills). -- Finlay McWalter ☻ Talk 19:26, 3 June 2011 (UTC)
- (edit conflict) Um, maybe I wasn't precise enough. My idea involves a website operated by a conglomerate of voluntarily participating copyright owners and/or legal representatives, who would examine the filed infringement reports and then decide how to respond themselves. Any report would be forwarded to the respective work's owner, leaving everything else up to them.
- It would simply be a quick, easy, streamlined and most importantly centralized way for independent users to bring copyright infringements to the respective owner's attention.
- Again: No reports would be filed to third party websites, let alone courts. It would be a friendly heads-up along the lines of "Hey guys, methinks that's a copyright infringement of one of your works over there at that url". And if a user files too many faulty reports, that user could simply be banned. --195.14.222.40 (talk) 19:38, 3 June 2011 (UTC)
- It's perfectly reasonable, however, for the fans of celebrity X to report what they imagine are copyvios of X's stuff to X or X's agents. Armed with that, X can file a DMCA claim if they wish. It wouldn't be a bad idea to set up a little company that allowed fans to report such claims (for subscribing celebrities) back to the corresponding agent. None of that involves legal filings made by anyone other than X and their representatives, so there's no problem with it. -- Finlay McWalter ☻ Talk 19:28, 3 June 2011 (UTC)
- Yes, that's my exact idea. So something like that doesn't already exist? Isn't that kinda weird? It seems like such a simple idea, which just occurred to me while doing the dishes. I can't be the first one to come up with something like this. --195.14.222.40 (talk) 19:40, 3 June 2011 (UTC)
- It reminds me of the Business Software Alliance. --Mr.98 (talk) 19:50, 3 June 2011 (UTC)
- There are already several companies that conduct automatic scans of video networks, file sharing networks, and the like, and are sometimes empowered to act as agents of copyright owners (and so to issue DMCA or other proceedings). They're also the ones that collect info for litigation against individual file-sharers, and sometimes they contaminate searches or corrupt file-sharing networks with defective content or wonky peers. It'd be they, rather than X, who would likely run (or buy) your crowdsourced service (and only if it could outperform their army of automated agents). The downside of that is that such chaps have gained a reputation for heavyhandedness (in some cases bordering on malfeasance) leading to them keeping as low a profile as they can. Crowdsourcing works when the well-intentioned outnumber and out-effort those opposed to the effort by a wide margin. I don't think your model has been tried on a large scale, and like all untried business models no-one can say it'll work, or that it definitely won't, until it has been tried. -- Finlay McWalter ☻ Talk 19:52, 3 June 2011 (UTC)
- Well, those companies are not very successful. Copyright infringements are omnipresent throughout the web. I defy you to name any remotely popular movie or music album which I or anyone couldn't find inside of five seconds, ready to download at several megabytes per second. I wouldn't have formed the idea of such a crowdsourced system if the companies dedicated to that task had any actual success at fighting copyright infringement. --195.14.222.40 (talk) 20:33, 3 June 2011 (UTC)
- As you say, the trouble isn't finding the stuff. The bottleneck, and the expense, is the legal backend. -- Finlay McWalter ☻ Talk 20:39, 3 June 2011 (UTC)
- Is there currently a bottleneck? Doesn't feel like it, tbh.
- At any rate, thanks for your insightful replies! Since I don't know the first thing about setting up a business or a website, I guess I'll have to leave it to someone else... :) --195.14.222.40 (talk) 20:41, 3 June 2011 (UTC)
- The point Finlay is trying to make is that finding infringement is trivial. Google works pretty dang well for that. Therefore the "difficult" parts probably lie elsewhere: issues with actual implementation, jurisdiction, legal fees, and so on. --Mr.98 (talk) 20:46, 3 June 2011 (UTC)
- Ah, ok. I was making a point about the inefficiency of those companies dedicated to fighting copyright infringement.
- As to Finlay McWalter's point, I'm simply talking about a frontend for volunteers to report infringements. Everything else would be up to the copyright owners themselves. So I don't actually see where such a bottleneck or problem would come into play, except for setting up that frontend reporting website and convincing the large copyright holders (record companies, film studios etc) to join in (for free, no less, since my idea is not to make money but rather the question why such a system has not long since been established). --195.14.222.40 (talk) 20:54, 3 June 2011 (UTC)
- (edit conflict) The trouble, from the copyright enforcer's perspective, is that the violators aren't large, readily-sued companies with stable operations. They're tens of thousands of virtually judgment-proof tweenagers and their ordinary parents. DMCA filings are mostly ineffective, as filing one takes days and all you've done is prevent a single person (one of those tens of thousands) from sharing. That's why copyright owners have worked so hard to get laws adopted (in places like France and the UK) which put more of the burden on ISPs, and which aim to disconnect persistent file sharers, or criminalise them. I doubt these will work either. -- Finlay McWalter ☻ Talk 20:57, 3 June 2011 (UTC)
the violators aren't large, readily-sued companies with stable operations -- Well, like I said above, my reporting system would be most suited to targeting infringements hosted on forums, filehosting services and streaming websites (I believe that these are particularly harmful due to their ease of use).
DMCA filings are mostly ineffective -- I don't believe they are. Look at how long copyright infringements are online. Files uploaded to some filehosting service are oftentimes up for years without the copyright holder (or anyone working on their behalf) ever catching on. In such cases, many downloads could be prevented. Needless to say, my system couldn't root out infringement. But it might make it a tad harder and more uncomfortable to share and especially to find and download files. What makes the current situation so incredible is that anyone can download or stream material. Web-savvy people will of course always find ways to share. But the masses need an easy avenue, and my system could very well work as a roadblock for many less apt web users, by reducing response times of the copyright holders (and by alerting them to many infringements in the first place). --195.14.222.40 (talk) 21:18, 3 June 2011 (UTC)
- Oh boy. A system that encourages people to snitch on their friends, neighbors, and employers! That's never brought out the worst of humanity!
- Besides, in all seriousness, I'm not sure it would help nearly as much as you think. The biggest offenders here are well known. Fox doesn't need your help to know that there are Simpsons clips on YouTube, or that dozens of "transcripts" of every episode are available just by googling for them. APL (talk) 21:51, 3 June 2011 (UTC)
- A system that encourages people to snitch on their friends, neighbors, and employers -- Why are you putting a bad spin on combating illegal activities? Anyway, my idea is mostly about reporting files hosted illegally on the web. If it happens to have been uploaded by one of my --in that case-- ex-friends, too bad for them. I don't share this very childish "us vs them" mentality.
- doesn't need your help -- Hm, that's what I was wondering about. Apparently, they do need help (which makes me wonder how serious the big copyright holders actually are about fighting infringement). --87.78.136.233 (talk) 22:30, 3 June 2011 (UTC)
- The issues regarding copyright laws in the United States are quite complicated. You might check out Lawrence Lessig's Free Culture for some great examples of that. It's a serious work written by a highly-respected law professor. It's available for free at his website. There are a lot of people who believe that the current state of US copyright law is an aberration brought about by excessive legislative pandering to a few major media companies. Whether you agree with it or not, you should be aware that a significant number of people online — including many who are very tech savvy — would see your site as a place for "snitching". --Mr.98 (talk) 23:01, 3 June 2011 (UTC)
- Oh sure. As a practical matter, you might as well put up a giant sign that says "All hacker groups welcome to attack this site!". APL (talk) 08:18, 4 June 2011 (UTC)
- Copyright violation is a civil matter between the copyright holder and the violator. I strongly question the motives of any 3rd party who wants to get involved.
- People don't snitch because they're interested in Truth, Justice, and the American Way. They do it either because they want to get someone they're already angry at, because they want some reward or favoritism from the authority they're snitching to, or because they get a sick thrill from pretending to be an authority figure. In my opinion, any of those three makes you a bad person. APL (talk) 08:18, 4 June 2011 (UTC)
- You could certainly set up the site as described — there's no bottleneck there. Whether people would want to participate, I have no idea. But I wouldn't expect it to make any difference in actual prosecution of copyright infringements. Again, the issue here is not that the companies cannot find instances of infringement. If you and I can do it within seconds of a Google search, so can they. If you can use a Torrent website, so can they. The issue isn't that the companies don't know that there are instances of infringement out there. --Mr.98 (talk) 21:52, 3 June 2011 (UTC)
- But then why on god's green earth don't they move much quicker? I mean, there's even a "report copyright violation" button at some meta search engines. Why don't they do that? Maybe that's what I actually can't wrap my head around about this whole thing. What the heck are they doing all day, complaining about dropping revenues, when they could be systematically searching the internet and demanding removal of illegally hosted copies of their material? --87.78.136.233 (talk) 22:30, 3 June 2011 (UTC)
- The main problem is that copyright infringement, even though it is easily locatable on the Internet, occurs on such a massive scale that it would require a very, very large staff to even feel like you're maybe making an impact on it. This is true with or without your reporting mechanism — as you know, even without the reporting mechanism, it would take a copyright owner only a few seconds to start finding dozens or hundreds of infringements. Many companies publicize e-mail addresses to which you can send reports of copyright infringement, but I'm 0 for 1 with reports to those companies: Once I was irritated at a brazen Craigslist posting in which a guy was selling DVDs full of PSP games for US$40. I was so annoyed that I e-mailed Sony's "stop piracy" e-mail address to report the post. Their response? Two weeks later I got an e-mail from a Sony person who said they had just investigated and found that the posting had expired, so they were closing the case. Bravo, Sony! And this was from one of the RIAA members, who are more aggressive against piracy than most other companies. Comet Tuttle (talk) 22:53, 3 June 2011 (UTC)
- And having it centralized would still not cut out the need for a staff to vet, file letters, etc. (And again, all of this is jurisdictional specific anyway. DMCA only applies to content hosted on American servers.) --Mr.98 (talk) 23:01, 3 June 2011 (UTC)
- Thank you all for your helpful responses! 98, I'll look into Lawrence Lessig's book, thanks for the pointer. Comet Tuttle, yeah, I've had similar experiences, not with Sony but other major players -- most of whom never bothered to respond or take any action. --87.78.136.233 (talk) 23:08, 3 June 2011 (UTC)
Why does Internet Explorer always say, "Internet Explorer cannot display the webpage"?
Whenever I want to go to Facebook, Google, or Microsoft.com, Internet Explorer always says "Internet Explorer cannot display the webpage", with "Diagnose Connection Problems" and "More Information" at the bottom. It constantly says that.
I have AT&T dial-up internet because I live in the country. Nothing's wrong with the modem, or anything. I think it's just the AT&T dial-up internet connection that's causing this.
I also have Internet Explorer 8. I'm downloading Internet Explorer 9 right now. I don't know if this will help.
I've tried diagnosing connection problems, refreshing the page, and everything else.
What is wrong???
Please help me :/
thank you =) — Preceding unsigned comment added by ShadowAngel1994 (talk • contribs) 22:55, 3 June 2011 (UTC)
- To clarify, have you tried using Mozilla Firefox or some other web browser? Do your other Internet-using applications work, such as Skype, web browsers, or Internet games? Comet Tuttle (talk) 23:06, 3 June 2011 (UTC)
- I'd try making sure you don't have any proxies filled in. To do this go into the browser options (Press Alt-T and then Options whilst running Internet Explorer) and then select the connection tab. From here click LAN settings. Normally you would only have "Automatically detect settings" ticked, but to be safe I'd just try unticking everything and then pressing OK. You shouldn't need to restart your browser, but I would just in case and see if it has made a difference. ZX81 00:04, 4 June 2011 (UTC)
cable modem dropping and wired/wireless router
I have a cable modem with a wired/wireless router. There are four computers wired to the router and we have four wireless devices. I've been having a lot of problems with the cable modem dropping - a couple of the lights will go out but it will come back in 1-2 minutes. The cable company says that I am getting a strong signal and thinks that the router is "pushing off" the modem. The signal goes to the modem before it goes to the router, so I don't see how the router could be causing the modem to drop. Is that possible/likely? Bubba73 23:22, 3 June 2011 (UTC)
- It is possible. It's important to note that the actual Internet service passes through the modem to the router. I've seen cases where the MTUs were set too high on a router and caused the modem to kick offline every so often. That being said, cable plant problems are sometimes hard to figure out and narrow down - the signal may look fine at a glance from their office, but there may still be problems with the service. (a lot of T3/T4 timeouts on the modem, High Forward error corrections, low SNR on the cable node, et cetera) ennasis @ 01:04, 2 Sivan 5771 / 4 June 2011 (UTC)
- About 9-14 days ago I had this problem badly. Then one morning it was totally down for a while, but when it came back the problem was much better. Then today I was dropped probably nearly 20 times by 3 PM. The cable company said to bypass the router and try it. I had it that way for a few hours and saw it drop only once. Then I put the router back in and it dropped twice in short order, but I haven't seen it drop since. Bubba73 02:03, 4 June 2011 (UTC)
- (It dropped again.) Can a normal user change the MTUs? Bubba73 02:19, 4 June 2011 (UTC)
- On most recent routers, yes. There should be some way to change them - you'd have to look at the support section in the manual/manufacturer's website to see how, since it's different for all of them. It should be pretty easy once you find the instructions. ennasis @ 03:44, 2 Sivan 5771 / 4 June 2011 (UTC)
- How do I know what they should be set at? Bubba73 04:29, 4 June 2011 (UTC)
- Usually either the ISP or the router OEM has a suggestion. Failing that, I usually recommend 1500, and if it keeps dropping, lower it by 100 each time. If you get to 900 and it's still dropping, then that's not your problem, and you can change it back to the default and call your ISP again. ennasis @ 05:05, 2 Sivan 5771 / 4 June 2011 (UTC)
Does it need to be 64-bit to use both cores?
I'm using 32-bit software on 32-bit Windows on a processor with two cores but a program of interest is using only one of those cores. Does it need to be 64-bit to use both cores? — Preceding unsigned comment added by 129.215.47.59 (talk) 23:46, 3 June 2011 (UTC)
- No, and I'm afraid there's nothing you can do: the software needs to be specifically written/changed so that it actually uses both cores. ZX81 00:01, 4 June 2011 (UTC)
- To clarify, those are two separate issues. A program must be multi-threaded to use more than one core. Whether or not it is multi-threaded in any useful way is unrelated to whether it's a 32- or 64-bit program. APL (talk) 00:14, 4 June 2011 (UTC)
- To clarify even further, "...there's nothing you can do..." unless you are a software programmer who has access to the program's source-code, and can meaningfully find some exploitable parallelism in the features you're using. Nimur (talk) 00:25, 4 June 2011 (UTC)
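For what it's worth, even without touching the program itself, a 32-bit system can keep both cores busy with separate processes; nothing about core count requires 64 bits. A trivial shell illustration (the counting jobs are just placeholder work):

```shell
# Two independent jobs run as separate processes, so a multi-core OS is
# free to schedule them on different cores - 32-bit or not.
seq 1 500000 | wc -l > /tmp/count_a &
seq 1 500000 | wc -l > /tmp/count_b &
wait   # block until both background jobs have finished
echo "total lines counted: $(( $(cat /tmp/count_a) + $(cat /tmp/count_b) ))"
```

This is process-level parallelism, the same reason a single-threaded app plus OS background tasks can still benefit from a second core; it doesn't help the one single-threaded program of interest go any faster.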
- And even further, you won't have luck running a 64-bit program on a 32-bit machine regardless of how many cores it has. -- kainaw™ 00:44, 4 June 2011 (UTC)
- Two 32-bit cores = 64 bits! :-) Bubba73 02:46, 4 June 2011 (UTC)
- ...And by the same logic, my new i7 950 64-bit Hyperthreaded Quad-core PC is 64 x 2 x 4 = 512-bit architecture! ;-) Nope, as has already been pointed out, it don't work like that, though running the same (single-thread 32-bit) application on an operating system that understands multiple cores may help a little, by running background tasks on the other core. AndyTheGrump (talk) 03:05, 4 June 2011 (UTC)
- I have heard people say that two 2-GHz cores = 4 GHz. Bubba73 04:47, 4 June 2011 (UTC)
- Even if you're a software engineer, you might not be able to do anything. Making use of multiple cores is still an ongoing research problem. Paul (Stansifer) 08:18, 4 June 2011 (UTC)