Resolved: Access a txt file and change individual fields?

sock1992

Well-known member
Joined
May 20, 2020
Messages
107
Programming Experience
Beginner
I've managed to change individual fields, but then I end up deleting everything else in the file. I only want to edit one field. Any help would be greatly appreciated. Thank you

Here's my current code:

C#:
public static void UpdatePatientData()
{
    Console.WriteLine("Please enter the ID of the customer you would like to update: ");
    string id = Console.ReadLine();

    string[] fields = File.ReadAllText("user.txt").Split('|'); // each field will represent an index from 1-4

    for (int i = 0; i < fields.Length; i++)
    {
        if (fields[i].Contains(id))
        {
            // here is where I would like to edit the name/gender/age etc.
            // and write it back to the file
        }
    }
}
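One common fix for the pattern above is read-modify-write: read every line, change only the matching record in memory, then write all the lines back so nothing else is lost. The sketch below assumes a layout the thread doesn't fully spell out: one record per line, `|`-separated fields, with the ID in the first field.

```csharp
using System.IO;

public static class PatientFile
{
    // Replaces one field of the record whose first field matches `id`,
    // then writes every line back so the rest of the file survives.
    // The field order (ID|Name|Age|Gender) is an assumption about the layout.
    public static void UpdateField(string path, string id, int fieldIndex, string newValue)
    {
        string[] lines = File.ReadAllLines(path);
        for (int i = 0; i < lines.Length; i++)
        {
            string[] fields = lines[i].Split('|');
            if (fields[0] == id)
            {
                fields[fieldIndex] = newValue;
                lines[i] = string.Join("|", fields);
            }
        }
        File.WriteAllLines(path, lines); // rewrite the whole file, records intact
    }
}
```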
 
Here is a blunt but quick rewrite of your code using Json. I prefer this than using the CSV recommendation by @jmcilhinney personally, but that's just me.
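The rewrite itself lives in post #13 and isn't quoted in this thread; as a rough sketch of the approach (assuming Newtonsoft.Json and a `Patient` class mirroring the fields in the JSON sample further down), it amounts to:

```csharp
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public class Patient
{
    public int ID { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public string Gender { get; set; }
    public string Address { get; set; }
    public string PhoneNumber { get; set; }
}

public static class PatientStore
{
    // Load the whole file into a dictionary keyed by patient ID,
    // edit one object, then serialize everything back out.
    public static void UpdateName(string path, int id, string newName)
    {
        var patients = JsonConvert.DeserializeObject<Dictionary<int, Patient>>(File.ReadAllText(path));
        patients[id].Name = newName; // change only the field you care about
        File.WriteAllText(path, JsonConvert.SerializeObject(patients, Formatting.Indented));
    }
}
```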
I would certainly have no issue with JSON. I just mentioned CSV because it appeared to be what the OP was already using.
 
I would certainly have no issue with JSON. I just mentioned CSV because it appeared to be what the OP was already using.
I would have no problem with a CSV file either. But that's not why I did what I did and essentially re-wrote the example as I did using a Json file. I was following what the OP originally wanted to do :
I've managed to change individual fields, but then I end up deleting everything else in the file. I only want to edit one field. Any help would be greatly appreciated. Thank you
If the OP only wants to edit one field, then by jumping over to JSON they can read the whole JSON file back and change the data for a given patient, as I've already demonstrated. The matching of which patient to edit is done in line 31/43 on p1/#13.

On a side note, I am not seeing the relevance of Skydiver's point about file size, when this was never previously raised as an issue by the OP.
 
With JSON vs. CSV: with a CSV you will likely be using a delimiter, splitting, splicing, concatenating, and so on. With JSON, you can read each patient into a class object and then store that object at its own position in a dictionary. Using the patient's ID as the key of a Dictionary<int, Patient> makes the data flow much smoother than the looping, splitting, and joining you would do with a CSV file. Anyway, I never said the OP "must" use it. :)

Night night now!
 
On a side note, I am not seeing the relevance of Skydiver's point about file size, when this was never previously raised as an issue by the OP.
In post #1, the problem the OP had was how to replace just a single field without overwriting or deleting the rest of the file. Your claim was that the problem can be solved just by using a JSON file format. Unfortunately, JSON is a plain-text file format, just like CSV or any variable-sized delimited file. I'm trying to show that there is no getting around it: the file needs to be overwritten, just like the way you are overwriting the file with:
C#:
File.WriteAllText(Path.Combine(PathToDBFile, FileName), JsonConvert.SerializeObject(Patients, Formatting.Indented));
on line 28 of post #13.

There isn't a StreamWriter implementation (yet) that lets you insert extra characters into the middle of an existing stream without touching the rest of the file data that has already been committed to disk. Yes, with a StreamWriter you could replace/overwrite part of the committed data, but you would be limited to the space the file is already using.
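To illustrate the point, a raw FileStream can overwrite existing bytes in place, but only within the file's current extent; a longer replacement still forces rewriting everything after it. This is a sketch, not code from this thread, and the fixed offset is illustrative:

```csharp
using System.IO;
using System.Text;

public static class InPlaceEdit
{
    // Overwrite bytes at a fixed offset without touching the rest of the file.
    // This only works cleanly when the replacement is exactly the same length;
    // a longer value would require rewriting everything after the offset.
    public static void OverwriteAt(string path, long offset, string replacement)
    {
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Write))
        {
            fs.Seek(offset, SeekOrigin.Begin);
            byte[] bytes = Encoding.ASCII.GetBytes(replacement);
            fs.Write(bytes, 0, bytes.Length);
        }
    }
}
```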
 
Seems very counterproductive does it not?
Was my use of data creation not sufficient?
There are times when you need to make a safe, atomic file replacement. Using File.Move() lets you accomplish that. When you don't know whether anybody else will be reading the same file that you are writing, there are two approaches: lock the file, or write to another file and then swap.

Locking requires that both the reader and the writer take locks whenever they are working, to ensure that they are working with consistent data. That requires coordination as well as passing in the appropriate lock flags. (Although file locks on network shares (SMB) supposedly work now, I still have my doubts, considering that some database implementations still discourage storing database files on file shares if multi-user access is anticipated.)

The other approach, writing to a temporary file and then swapping, also works (at least on NTFS) even if a reader is still reading the original file. This is how Windows Update performs patching so that programs can continue running even while the DLL is mapped into process space. The new DLL version won't be read in until the process is restarted.
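A minimal sketch of that write-then-swap pattern (the file names and helper are illustrative, not from this thread): write the new contents to a temporary file, then swap it in with File.Replace, which also keeps a backup of the old contents.

```csharp
using System.IO;

public static class SafeWrite
{
    // Write-then-swap: readers never see a half-written file,
    // because the final step is a rename/swap rather than a rewrite.
    public static void ReplaceContents(string path, string newContents)
    {
        string temp = path + ".tmp";
        File.WriteAllText(temp, newContents);
        if (File.Exists(path))
            File.Replace(temp, path, path + ".bak"); // swaps temp in, backs up the original
        else
            File.Move(temp, path);
    }
}
```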
 
A simple CSV file looks like this :
CSV:
John,Doe,120 jefferson st.,Riverside, NJ, 08075
Jack,McGinnis,220 hobo Av.,Phila, PA,09119
"John ""Da Man""",Repici,120 Jefferson St.,Riverside, NJ,08075
What does required space have to do with anything? The OP never said anything about the file needing to be a specific size. A CSV file is simply text data, not binary. Plus, CSV files have a shitty layout where you need to split at a delimiter if you want to change a specific section of the file.
JSON is laid out much more clearly, and it makes it easier to edit a specific record without needing to break a line down into an array of data delimited by a comma or some other delimiter.

Also note that a JSON file has each field on a new line and in a structured manner, and this makes it easier to work with. And given that the original poster isn't claiming to hold tons of data here, reading back the whole file using the example I provided does exactly what they wanted, but better: before overwriting, it reads the original data back in, makes the change to that data, and then writes out the new file contents. Ex:
JSON:
{
  "0": {
    "ID": 0,
    "Name": "Tom",
    "Age": 33,
    "Gender": "Female",
    "Address": "999 Lala land",
    "PhoneNumber": "086997513"
  },
  "1": {
    "ID": 1,
    "Name": "Bob",
    "Age": 36,
    "Gender": "Male",
    "Address": "888 Home Vila St.",
    "PhoneNumber": "052895567"
  }
}
This is much easier to loop over, looking for a patient's ID or name and changing it with some simple LINQ syntax. We know each patient has six fields, so it's not rocket science to see that a new patient starts every six lines or so, provided you skip the first line of the file. Looping over each line would let you replace any data without parsing the whole file, but if you are going to loop over each line anyway, you may as well just read the whole thing back in as JSON. We will have to agree to disagree, as it's my opinion that this is easier to work with than any CSV file.
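To illustrate the LINQ point: once the file above is deserialized into a dictionary keyed by ID, an ID lookup is direct, and a name lookup is a one-liner. The `Patient` class here is an assumption mirroring the fields in the JSON sample.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Patient
{
    public int ID { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public string Gender { get; set; }
}

public static class PatientQuery
{
    // Key lookup is O(1) when you have the ID; a LINQ scan over the
    // values handles the case where you only know the name.
    public static Patient FindByName(Dictionary<int, Patient> patients, string name)
        => patients.Values.FirstOrDefault(p => p.Name == name);
}
```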
There are times when you need to make a safe, atomic file replacement. Using File.Move() lets you accomplish that. When you don't know whether anybody else will be reading the same file that you are writing, there are two approaches: lock the file, or write to another file and then swap.
Firstly, if someone isn't meant to be reading sensitive data, then maybe it shouldn't be stored in a text file in the first place, and a database with user access control would be the more practical approach. At the end of the day, there are three ways to do something: the right way, the wrong way, and the way that I do it. You don't have to like the way that I do it, but the OP wanted to use a text file for data storage, and it was my choice to use JSON as a personal preference over the CSV method suggested above. It was also my choice to rewrite the code they had, because I didn't like it, and I never said they must use it. That's up to them.
 
Lastly, the reason I wrote it as I did is that unless you know which line (number) of data to change, you can't use LINQ's .Skip to jump directly to that line. You could keep a separate file recording where each piece of data is stored, line by line, in the JSON file, but that would be just as counterproductive as the two-file approach the OP is already using with their temp file. So I didn't do that. That doesn't mean it can't be done. I'll rest my case.

Don't forget, .Skip uses an enumerator under the hood, so you are still looping line by line, which would have been about as practical as reading the file back in as JSON. This does negate my original claim that it can't be done; I just found, when writing it, that jumping to another line to alter it is counterproductive when it's equally quick to read and rewrite the whole file, as I decided to do.
 
:ROFLMAO: Well, I'd like to say thank you to both of you for taking the time to reply to my messages, highly appreciated!... I'm currently working on a console application, and I've now got everything working the way that I want it to using a CSV file.

I'm a complete beginner and haven't used JSON files before, hence why I was using a txt file to begin with.

From looking at the code Sheepings provided, JSON does look easier to work with. I'll spend some time looking into JSON and see if I can change some of my code around (y)
 
JSON and XML are easier to work with because of the extensive serialization/deserialization support built into .NET Core, and/or the .NET Framework in combination with Newtonsoft's ubiquitous JSON.NET. It is this serialization/deserialization that makes it look easy; under the covers, it still does the actual hard work of reading and populating objects, and writing the objects out.

For CSVs, as mentioned very early in this thread, there is reading support in the form of the TextFieldParser class built into the .NET Framework, but you are on your own for writing out. And the reading support only goes as far as giving you the field data; populating objects is still your job.
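For illustration, a minimal TextFieldParser read loop might look like this (a sketch; the class lives in the Microsoft.VisualBasic.FileIO namespace, which may need an explicit assembly or package reference):

```csharp
using System.Collections.Generic;
using Microsoft.VisualBasic.FileIO; // home of TextFieldParser

public static class CsvReader
{
    // Reads a CSV file and returns the raw fields per record;
    // quoted fields such as "John ""Da Man""" are unescaped for you.
    public static List<string[]> ReadRecords(string path)
    {
        var records = new List<string[]>();
        using (var parser = new TextFieldParser(path))
        {
            parser.TextFieldType = FieldType.Delimited;
            parser.SetDelimiters(",");
            while (!parser.EndOfData)
                records.Add(parser.ReadFields());
        }
        return records;
    }
}
```

Populating a `Patient` object from each `string[]` is still left to you, which is the point being made above.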
 
Good to see you found it helpful. However, you really should consider what Skydiver said at the start of the topic and use a database for practicality. There is a range of database options you can use; look up SQL Server Compact. I believe it's deprecated now but still usable, and it would serve your project well.
 
There have been times when I've given some thought to this (creating simple data stores for small data).

I've wondered if there might be a good approach using Memory-Mapped Files. It would work across multiple processes/threads.
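As a sketch of that idea (the helper names and the fixed offset are illustrative, not from this thread): mapping a file into memory lets you update a field in place, and other processes mapping the same file see the change without re-reading it.

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;

public static class SharedStore
{
    // Map a file into memory and write a fixed-offset field in place.
    // Any other process mapping the same file observes the update.
    public static void WriteAge(string path, int age)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.OpenOrCreate, null, 1024))
        using (var accessor = mmf.CreateViewAccessor())
            accessor.Write(0, age);
    }

    public static int ReadAge(string path)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
        using (var accessor = mmf.CreateViewAccessor())
            return accessor.ReadInt32(0);
    }
}
```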
 
I think someone has already ported C-ISAM to C#, and that may be a good start if you are really interested. An alternative is porting BerkeleyDB to C#, but that would be much more complex.
 